Welcome to this issue of Digitalisation World, which includes a major focus on Smart Cities. As with so many of the digital transformation technologies and strategies currently being developed and/or implemented right across the globe, the jury is out as to what will, or won’t, be acceptable in the smart cities of the future. Clearly, intelligent automation has a massive role to play, but this inevitably brings with it concerns over security. And while the world continues merrily on its way despite the almost daily news of security breaches of various types and sizes, the more critical the application, the more critical security becomes.
A bank’s customers lose some money, or lose access to their money, or have their financial details hacked – none of this is great news but, hopefully, no one dies as a result. Entrust a city’s utilities and transport infrastructure to automation, however, and security becomes a great deal more critical – lives are at risk. That’s not a reason not to do it, but the risks and the rewards need to be evaluated extremely carefully before any decisions are made.
Alongside this growing security issue, we have the equally important consideration of…human beings. Just what will we, or won’t we, accept in terms of having our lives ordered, measured, monitored and, ultimately, governed by technology – most noticeably, what many might see as the increasing invasion of privacy which allows very little human activity to go ‘under the radar’? From the moment an individual wakes up and starts using water, heating and food supplies, through the journey to work (wherever that might be), time spent at work and lunchtime, to the evening’s downtime spent watching, listening, eating or playing sport, there’s every chance that all of these activities will be monitored and measured – ostensibly to provide valuable feedback to the supply industry, so that it can fine-tune and improve our experiences.
Conspiracy theorists are having a field day in terms of focusing on the links (imagined and real) between some of the hyperscale technology providers and various governments around the world, but on a more mundane basis, do we want, or should we have, to endure a world where, say, your insurer is constantly adjusting your health premium based on the lifestyle you lead?
On the flip side, technology allows us to create our own world, in which we can shut out huge amounts of potentially important information simply by choosing not to view it – as in the case of choosing what news we receive. That’s not particularly healthy.
So, we arrive at the subject of ethics. Indeed, hardly a day passes when I don’t receive one or more press releases focusing on the topic of AI and ethics. Call me cynical, but big business and ethics don’t tend to mix very well, so I’ve no great expectations as to the likely winner in the digital transformation ‘battle’ between what’s possible and what’s acceptable. So, a smart city as portrayed in Blade Runner might seem a long way off right now, but, if we haven’t wiped out the planet in the meantime, it’s a definite possibility in the future!
A new survey has shown that 88% of companies believe ‘self-service’ will be the fastest growing channel in customer service by 2021.
The State of Native Customer Experience Report, revealed at Unbabel’s Customer Centric Conference 2019, details the opinions of senior executives surveyed at global companies (including several Fortune 500 organizations) regarding their worldwide multilingual customer support operations across the technology, retail, travel, finance, business services and entertainment sectors.
The survey was commissioned by multilingual support provider Unbabel and run by Execs In The Know, a global community of customer experience professionals.
The report reveals that ‘average speed of answer’ is no longer the gold standard by which customer support is measured. When asked which general factors had the highest impact on customer satisfaction, nearly all respondents (92%) rated “solving the customer’s problem” as having the most impact, followed by providing “knowledgeable support agents” (64%), with “speed of case resolution” (62%) only the third most important.
In other words, first-time resolution — delivered by agents well-equipped to understand and address customer queries — has emerged as the key performance metric. More than 80% of companies surveyed stated that they were investing in chatbots to meet these growing levels of customer demand.
“In a highly competitive landscape, delivering high-quality support in a customer’s native language can be a major differentiator for businesses,” said Chad McDaniel, Execs In The Know President and Co-Founder. “With 76% of respondents expecting live chat volumes to increase and 70% expecting social media volumes to increase over the next two years, businesses need to ensure their native language strategy is effectively servicing all customers in all channels.”
The survey also provided a definitive ranking for the most expensive languages to cover in customer support. Japanese ranked first as the most expensive language, followed by German, French, and Chinese.
When asked, “What is your biggest and most painful challenge in terms of languages and customer service?” nearly half of survey respondents (47%) identified sourcing and agent retention as the primary pain point.
While talent retention and sourcing for high-demand customer service languages remains a priority for global organizations, technological advances in natural language processing and machine learning are paving the way for more intelligent virtual assistants that can accommodate changes and fluctuations in customer demand.
Unbabel CEO Vasco Pedro commented: “This research makes it clear that the channels where customer interactions take place are evolving along with technological and cultural shifts. In line with Unbabel’s roadmap, the vast majority of respondents expect self-service volumes to increase over the next two years, followed by live chat, social media platforms and text messaging. It makes sense: as primarily digital consumers establish more buying power, they are also demanding comprehensive digital support. Today, it’s more important than ever for businesses to adapt to a globalized economy and serve their customers quickly, cost-effectively, and above all in their native language.”
Business efficiency increases by two thirds when technology is implemented with a supporting culture and long-term digital vision.
Research from Oracle and the WHU - Otto Beisheim School of Management has shown business efficiency increases by two thirds when the right technology is implemented alongside seven key factors. According to the research, many organisations have invested in the right technologies, but are lacking the culture, skills or behaviours necessary to truly reap their benefits. The study found business efficiency only increases by a fifth when technology is implemented without the identified seven factors.
The seven key factors are: data-driven decision making, flexibility & embracing change, entrepreneurial culture, a shared digital vision, critical thinking & questioning, learning culture and open communication & collaboration.
The new research questioned 850 HR Directors as well as 5,600 employees on the ways organisations can adapt for a competitive advantage in the digital age. The study showed that achieving business efficiency is critical to becoming an agile organisation that can keep pace with change, with 42% of businesses reporting an overall increase in organisational performance once business efficiency was achieved.
“Pace of change has never been more important for organisations than it is in the current climate,” says Wilhelm Frost, from the Department for Industrial Organization and Microeconomics at WHU - Otto Beisheim School of Management. “Adaptability and agility are extremely important for organisations if they want to get ahead of the competition and offer market-leading propositions. Being adaptable means better support for customers and is essential to meeting their needs, but it’s also a big factor in any company attracting and retaining employees with the skills to drive it forward. Companies unprepared for the relentless pace of change will simply not be able to compete for skills in today’s digital marketplace.”
The research showed a third of business leaders worldwide don’t think they are currently operating in a way that will attract – or compete for – talent. This went up to over half of business leaders in markets such as India, Brazil and Chile. Meanwhile, a quarter of employees worldwide said they were worried about losing their jobs to machines.
“The study highlights the opportunity for HR to step up and lead workforce transformation by allowing the productivity benefits of technology to be realised,” said Joachim Skura, Strategy Director HCM Applications, Oracle. “Too many organisations are implementing technology but not properly integrating it into the business. Human workers still fear it’s them versus the machines when, in fact, organisational growth will come from the two working together. With any technology implementation, there needs to be both a culture change and upskilling of staff to work with machines and technology. It’s these digital skills that make up the seven factors needed to realise the true benefits of any technology and become an adaptable business.”
Findings show there is an expectation gap between expected benefits of the public cloud and what it’s actually delivering for enterprises today.
Cohesity has published results of a global survey of 900 senior IT decision makers that shows a major expectation gap exists between what IT managers hoped the public cloud would deliver for their organisations and what has actually transpired.
More than nine in ten respondents across the UK believed, when they started their journey to the cloud, that it would simplify operations, increase agility, reduce costs and provide greater insight into their data. However, of those who felt the promise of the public cloud hadn’t been realised, 95 percent believe it is because their data is greatly fragmented in and across public clouds and could become nearly impossible to manage long term (Q7).
“While providing many needed benefits, the public cloud also greatly proliferates mass data fragmentation,” said Raj Rajamani, VP Products, Cohesity. “We believe this is a key reason why 38 percent of respondents say their IT teams are spending between 30-70 percent of their time managing data and apps in public cloud environments today.” (Q10)
There are several factors contributing to mass data fragmentation in the public cloud. First, many organisations have deployed multiple point products to manage fragmented data silos, which can add significant management complexity. The survey, commissioned by Cohesity and conducted by Vanson Bourne, found that nearly half (42 percent) of respondents are using three or four point products to manage their data – specifically backups, archives, files and test/dev copies – across public clouds today, while nearly a fifth (19 percent) are using as many as five or six separate solutions (Q4). Respondents expressed concerns about using multiple products to move data between on-premises and public cloud environments if those products don’t integrate: 59 percent are concerned about security, 49 percent worry about costs and 44 percent are concerned about compliance (Q23).
Additionally, data copies can increase fragmentation challenges. A third of respondents (33 percent) have four or more copies of the same data in public cloud environments, which can not only increase storage costs but create data compliance challenges.
"The public cloud can empower organisations to accelerate their digital transformation journey, but first organisations must solve mass data fragmentation challenges to reap the benefits,” continued Rajamani. “Businesses suffering from mass data fragmentation are finding data to be a burden, not a business driver.”
Disconnect between senior management and IT
IT leaders are also struggling to comply with mandates from senior business leaders within their organisation. Almost nine in ten (87 percent) respondents say that their IT teams have been given a mandate to move to the public cloud by senior management (Q24). However, nearly half of those respondents (42 percent) say they are struggling to come up with a strategy that effectively uses the public cloud to the complete benefit of the organisation.
“Nearly 80 percent of respondents stated their executive team believes it is the public cloud service provider's responsibility to protect any data stored in public cloud environments, which is fundamentally incorrect,” said Rajamani. “This shows executives are confusing the availability of data with its recoverability. It’s the organisation’s responsibility to protect its data.”
Eliminating fragmentation unlocks opportunities to realise the promise of the cloud
Despite these challenges, more than nine in ten (91 percent) believe that the public cloud service providers used by their organisation offer a valuable service. The vast majority (97 percent) expect their organisation’s public cloud-based storage to increase between 2018 and the end of 2019, by an average of 94 percent.
Nearly nine in ten (88 percent) believe the promise of the public cloud can be better realised if solutions are in place that can help them solve mass data fragmentation challenges across their multi-cloud environments (Q26). Respondents believe there are numerous benefits that can be achieved by tackling data fragmentation in public cloud environments, including: generating better insights through analytics / artificial intelligence (46 percent), improving the customer experience (46 percent), and maintaining or increasing brand reputation and trust by reducing risks of compliance breaches (43 percent) (Q30).
“It’s time to close the expectation gap between the promise of the public cloud and what it can actually deliver to organisations around the globe,” said Rajamani. “Public cloud environments provide exceptional agility, scalability and opportunities to accelerate testing and development, but it is absolutely critical that organisations tackle mass data fragmentation if they want the expected benefits of cloud to come to life.”
The Internet of Things (IoT) is the emerging technology that offers the most immediate opportunities to generate new business and revenues, according to the Emerging Technology Community at CompTIA, the leading trade association for the global tech industry.
The community has released its second annual Top 10 Emerging Technologies list, ranked according to the near-term business and financial opportunities the solutions offer to IT channel firms and other companies working in the business of technology.
The Internet of Things also topped the community’s 2018 Top 10 list.
“Everybody in the technology world, as well as many consumers, is hearing the term Internet of Things,” said Frank Raimondi, a member of the CompTIA Emerging Technology Community leadership group who works in strategic channel and business development for Chargifi.
“To say it’s confusing and overwhelming is an understatement,” Raimondi continued. “IoT may mean many things to many people, but it can clearly mean incremental or new business to a channel partner if they start adding relevant IoT solutions with their existing and new customers. More importantly, they don’t have to start over from scratch.”
Artificial intelligence (AI) ranks second on the 2019 list.
“The largest impacts across all industries – from retail to healthcare, hospitality to finance – are felt when AI improves data security, decision-making speed and accuracy, and employee output and training,” said Maddy Martin, head of growth and education for Smith.ai and community vice chair.
“With more capable staff, better-qualified sales leads, more efficient issue resolution, and systems that feed actual data back in for future process and product improvements, companies employing AI technologies can use resources with far greater efficiency,” Martin added. “Best of all, as investment and competition increase in the AI realm, costs are reduced.”
Third on this year’s list of top emerging technologies is 5G wireless.
“The development and deployment of 5G is going to enable business impact at a level few technologies ever have, providing wireless at the speed and latency needed for complex solutions like driverless vehicles,” said Michael Haines, director of partner incentive strategy and program design for Microsoft and community chair.
“Additionally, once fully deployed geographically, 5G will help emerging markets realize the same ‘speed of business’ as their mature counterparts,” Haines commented. “Solution providers that develop 5G-based solutions for specific industry applications will have profitable, early-mover advantages.”
Also on the top 10 list is blockchain, coming in at number five this year.
“Blockchain came crashing down from its hype-cycle peak, and that’s probably for the best,” said Julia Moiseeva, founder of CLaaS (C-Level as a Service) Management Solutions Ltd. and member of the community’s leadership group. “Now that the luster of novelty and the furor of the masses are gone, the dynamic of work around blockchain has taken a complete U-turn – again, for the best.”
“Now we observe players in this space building blockchain-based solutions in response to real industry problems,” Moiseeva explained. “The trend of blockchain as a service (BaaS) is the one to watch. BaaS will be the enabler of significant revenue and cost-saving opportunities for cross-industry participants, especially those who don’t have the know-how or R&D to develop their own blockchain. We are moving toward plug-and-play product suites.”
Two new technologies, serverless computing and robotics, made the 2019 list, replacing automation and quantum computing.
Densify has published the findings of a global enterprise cloud survey of IT professionals.
The survey found that the top priorities for most organisations, when it comes to deploying workloads in the cloud, are all about the applications: how to ensure applications perform well, how to keep the environments secure, and how to accomplish these goals within budget. With 55% of respondents coming from enterprises with over 1,000 employees, these global organisations have concerns over how to ensure apps function well in the cloud. 66% of the organisations are running multi-cloud environments, with AWS the clear leader (70% usage), followed by Azure (57% usage) and Google Cloud Platform (31% usage). On-prem private cloud usage was at 37%.
Container technology is rapidly being adopted to run applications and microservices, with 44% of respondents already running containers and another 24% looking into them. Six months ago, when Densify conducted its last market survey, the proportion of respondents already running containers was 19%, showing strong growth in container adoption in just the last half year. As for which container platforms they run on, the top technology is from AWS, with Amazon Elastic Container Service (ECS) and Amazon Elastic Container Service for Kubernetes (EKS); 56% of the audience use one of these two. However, Kubernetes in general is the most popular technology overall, being used on AWS, Azure, Google and IBM Cloud.
While enterprises have fully embraced and adopted the cloud and containers, there are some issues in the way people adopt the cloud that can introduce risk to their businesses. A surprisingly large number of participants (40%) shared that they are not certain about, or up to speed with, the latest cloud technologies from the cloud providers, or how to leverage them for their own success. When asked how they decide on and select the optimal cloud resources to run their applications, 55% reported “best guess” and “tribal knowledge” as their main strategy.
Summary of Key Findings:
o Container adoption has risen from 19% to 44% in the last six months
o Managed Kubernetes services (e.g. Amazon’s EKS, Azure’s AKS, Google’s GKE) are the winning platforms (66%) when it comes to container management and orchestration, followed by Amazon Elastic Container Service (ECS).
o More than 55% are using best guess or tribal knowledge to specify container CPU and memory Request and Limit values, which is a major issue for running containers successfully
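For readers less familiar with the Request and Limit values mentioned in the last finding: these are the explicit CPU and memory settings declared on each container, which the report says are too often set by guesswork. As a purely illustrative sketch (using the official Kubernetes Python client; the workload name, image and values are hypothetical, not recommendations drawn from the survey), they might be declared like this:

```python
# Illustrative only: declaring container CPU/memory Requests and Limits
# explicitly via the Kubernetes Python client ("kubernetes" package assumed
# installed). All names and values below are hypothetical examples.
from kubernetes import client

resources = client.V1ResourceRequirements(
    # Request = the amount the scheduler reserves for the container.
    requests={"cpu": "250m", "memory": "256Mi"},
    # Limit = the ceiling the container may use before being throttled
    # (CPU) or terminated (memory).
    limits={"cpu": "500m", "memory": "512Mi"},
)

container = client.V1Container(
    name="example-app",           # hypothetical workload name
    image="example.com/app:1.0",  # hypothetical image reference
    resources=resources,
)

print(container.resources)
```

Setting these values from measured usage rather than “best guess” is precisely the gap the survey highlights.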
“The cloud is really complex, with hundreds of services from each cloud vendor, making the selection of the right services for business super difficult,” said Yama Habibzai, CMO, Densify. “It really is humanly impossible to align the workload and application demands to the right cloud resources, without automation and analysis.”
Controlling cloud spend is another key objective for these enterprises – their third most important, based on the survey results. But the data shows that 45% of the audience indicated they are spending more than they have budgeted, with 20% spending more than $1.2 million per year. Furthermore, 55% of the audience are using manual effort – and guesswork – to select resources for their workloads, which directly drives up their cloud risk and spend.
When asked whether automating the optimal cloud/container resource selection could help them achieve their objectives, 80% responded favourably, saying it could help them reduce application risk and drive down their cloud spend.
Securing the Internet of Things (IoT) is something which cannot be done with a one-size-fits-all approach – and every kind of connected object must be assessed individually, the Co-chair of Trusted Computing Group’s (TCG) Embedded Systems Work Group said recently.
Speaking at the Embedded Technologies Expo and Conference 2019, Steve Hanna highlighted how the growing trend for greater connectivity puts everyday objects at risk of exploitation and makes mission critical systems in businesses and Governments more vulnerable to attacks.
And while securing the IoT is often referred to as a singular movement, Hanna emphasized that every device had to be handled according to its individual needs, warning that there would be no single method that could be universally applied to safeguard devices.
“When you consider other security systems, for example a lock, what you would use for a front door is very different to what would be used for a bank or a Government building because the scale of an attack would be much greater and more complex in the case of the latter,” he said. “The same is true for computers and embedded systems; when we think about security, we have to think about different levels that correspond to the level of risk.”
Hanna illustrated his point by comparing a baby monitor with a chemical plant – both of which are likely to become connected as standard in the near-future. For the latter, he said, the impact of an attack could be as serious as an explosion which would ultimately endanger human life.
“While it is important to secure things like baby monitors, for example, to avoid the devices being used to eavesdrop on conversations, there is a price point that needs to be met as well – no one is going to spend thousands of dollars on a baby monitor and for the manufacturers, that means the security solution needs to be less expensive,” continued Hanna. “In the case of a chemical plant, the risk is much greater, the level of attack is likely to be more sophisticated and a serious amount of money could have been invested in carrying it out. As a result, the security measures need to be much more stringent.”
Hanna went on to explain that the customized security approach required by the Internet of Things can be easily achieved using technologies that are available today. TCG’s security standards are all based on the concept of Trusted Computing where a Root of Trust forms the foundation of the device and meets the specific requirements of the device or deployment.
“TCG’s wide variety of security options provide the building blocks to create secure systems,” said Hanna. “In the case of a chemical plant, industrial-grade discrete TPM hardware can be built not just into the plant’s firewall but also into the control system. This will enable these systems to be monitored in real time and even sophisticated attacks to be identified and prevented. For devices which are less high-risk, TPM firmware can be created which has the same set of commands but is less rigorously secured and therefore more cost-effective. Finally, for very tiny devices that can’t afford TPM firmware, DICE offers a good alternative.”
Updated forecast projects growth at edge and precipitous talent drain.
Five years ago, Vertiv led a global, industry-wide examination of the data center of the future. Data Center 2025: Exploring the Possibilities stretched the imaginations of more than 800 industry professionals and introduced a collaborative vision for the next-generation data center. Today, Vertiv released a mid-point update – Data Center 2025: Closer to the Edge – and it reveals fundamental shifts in the industry that barely registered in the forecasts from five short years ago.
The migration to the edge is changing the way today’s industry leaders think about the data center. They are grappling with a broad data center ecosystem made up of many types of facilities and relying increasingly on the edge of the network. Of participants who have edge sites today or expect to have edge sites in 2025, more than half (53%) expect the number of edge sites they support to grow by at least 100%, with 20% expecting an increase of 400% or more. Collectively, survey participants expect their total number of edge computing sites to grow 226% between now and 2025.
During the original 2014 research, the edge was acknowledged as a growing trend but merited just four mentions in the 19-page report. The industry’s attention at that point was focused firmly on hybrid architectures leveraging enterprise, cloud and colocation resources. Even in an industry that routinely moves and changes at light speed, the growth of the edge and the dramatic impact it will have on the data center is staggering.
“In just five short years, we have seen the emergence of an entirely new segment of the ecosystem, driven by the need to locate computing closer to the user,” said Rob Johnson, Vertiv CEO. “This new distributed network is reliant on a mission-critical edge that has fundamentally changed the way we think about the data center.”
“Making predictions about technology shifts more than two or three years ahead is challenging, but this research aligns with the vision of an ever-changing and incredibly dynamic market which is unfolding in front of our eyes,” said Giordano Albertazzi, president for Vertiv in Europe, Middle East and Africa. “Specifically, the estimates for future growth in edge computing are consistent with the predicted growth in AI, IoT and other latency and bandwidth dependent applications. The challenge – especially given the shortage in data center personnel – will be managing all of that new infrastructure effectively and efficiently. Remote management and approaches such as lights-out data centers will play an increasingly important role.”
More than 800 data center professionals participated in the survey. Among the other notable results:
Zero trust technologies that enable seamless, secure user authentication are critical; 6 in 10 employees are disrupted, irritated, frustrated and waste time on passwords.
MobileIron has revealed the results of a survey conducted with IDG, which found that enterprise users and security professionals alike are frustrated by the inefficiency and lax security of passwords for user authentication. With 90% of security professionals reporting that they have seen unauthorized access attempts as a result of stolen credentials, it’s clear that the future of security requires a next generation of authentication that’s more secure.
Mobile devices are the best option for replacing passwords, as they remain at the centre of the enterprise in terms of where business is done, how access is granted, and how users are authenticated. In fact, the survey revealed that almost nine in ten (88%) security leaders believe that mobile devices will soon serve as digital IDs to access enterprise services and data.
The survey, sponsored by MobileIron, polled 200 IT security leaders in the US, UK, Australia, and New Zealand working in a range of industries at companies with at least 500 employees. The aim was to discover the major authentication pain points facing enterprises.
“It’s time to say goodbye to passwords. They not only cause major frustrations for users and IT professionals, but they also pose major security risks,” said Rhonda Shantz, Chief Marketing Officer at MobileIron. “That’s why MobileIron is ushering in a new era of user authentication with a mobile ID and zero sign-on experience from any device, any OS, any location, to any service. With more and more users accessing apps and company data via their own mobile devices, it’s not only easier to leverage mobile devices than passwords for user authentication – it’s also much more secure.”
The perils of passwords:
The merits of mobile:
Okta research shows workers are ready to go passwordless this year.
Okta has debuted The Passwordless Future Report, which demonstrates how passwords negatively impact the security of organisations and the mental health of employees. The research, which surveyed more than 4,000 workers across the UK, France and the Netherlands, also found that there is a readiness for passwordless security methods such as biometrics, with 70% of workers believing biometrics would benefit the workplace.
Dr. Maria Bada, Research Associate, Cambridge University, said: “Okta’s research clearly showed that employees can experience negative emotions and stress due to forgetting a password, and that this can impact not only their career but also their emotional health. And this is not due to forgetting a password, but due to using an insecure method to remember passwords. Biometric technology can be promising in creating a passwordless future, but it’s essential to create an environment of trust, while ensuring privacy and personal data protection.”
Passwords are the ideal targets for cyber crime
The majority of hacking-based breaches are a result of reused, stolen or weak passwords. Okta’s research found that in total, 78% of respondents use an insecure method to help them remember their password and this rises to 86% among 18-34 year olds. Some of these memory aids include:
Dr. Bada said, “Passwords are often quite revealing. They are created on the spot, so users might choose something that is readily to mind or something with emotional significance. Passwords tap into things that are just below the surface of consciousness. Criminals take advantage of this and with a little research they can easily guess a password.”
Passwords impact mental health in the workplace
Anxiety is on the rise in the workplace due to several factors, but security is one that has flown under the radar. The Passwordless Future Report found that 62% of respondents feel stressed or annoyed as a result of forgetting their password. This was highest in the UK (69%), compared with France (65%) and the Netherlands (53%). The average worker must remember a total of 10 passwords in everyday life, which evokes negative emotions in nearly two-thirds of respondents (63%).
Dr. Bada said, “The potential impact from forgetting a password can cause extreme levels of stress, which over time can lead to breakdown or burnout. That is due to our brains being sensitive to perceived threats. Being constantly focused on potential threats online causes us to become hypersensitive to stress. In the long term that can cause mental health problems.”
The future is passwordless
By combining methods such as biometrics and machine learning with strong authentication, organisations can remove inadequate gateways like passwords altogether.
A staggering 70% of respondents feel there are advantages to using biometric technology in the workplace. This is highest in France (78%) and among 18-34 year olds across all regions (81%). Almost one-third (32%) feel that biometric technology could make their day-to-day life easier or reduce their stress and anxiety levels in the workplace. However, 86% of respondents have some reservations about sharing biometrics with their employers, demonstrating that workers are ready for the ease of use, but do not trust organisations to protect their data.
Todd McKinnon, CEO and co-founder of Okta concluded, “At Okta, we believe deeply in the potential for technology, and that for organisations of all sizes and industries attempting to become technology companies, trust is the new frontier. Today, businesses need to adopt technology that enables them to innovate quickly, while prioritising the security, privacy, and consent controls that help them to be trusted. Passwords have failed us as an authentication factor, and enterprises need to move beyond our reliance on this ineffective method. In 2019, we will see the first wave of organisations going completely passwordless and Okta’s customers will be at the forefront.”
New EfficientIP report, in partnership with IDC, shows 34% increase in attacks.
EfficientIP, a leading specialist in DNS security for service continuity, user protection and data confidentiality, has published the results of its 2019 Global DNS Threat Report, sponsored research conducted by market intelligence firm IDC.
Over the past year, organizations faced more than nine DNS attacks on average, an increase of 34%. Costs went up too, rising 49%: one in five businesses lost over $1 million per attack, and attacks caused application downtime for 63% of those hit. Other issues highlighted by the study, now in its fifth year, include the broad range and changing popularity of attack types, from volumetric to low-signal, including phishing (47%), malware-based attacks (39%) and old-school DDoS (30%).
Also highlighted were the greater consequences of not securing the DNS network layer against all possible attacks. No sector was spared, leaving organizations open to a range of advanced effects from compromised brand reputation to losing business.
Romain Fouchereau, Research Manager European Security at IDC, says: “With an average cost of $1m per attack, and a constant rise in frequency, organisations just cannot afford to ignore DNS security and need to implement it as an integral part of the strategic functional area of their security posture to protect their data and services.”
DNS is a central network foundation that enables users to reach all the apps they use for their daily work. Most network traffic – legitimate or malicious – first goes through a DNS resolution process, so any impact on DNS performance has major business implications. Well-publicized cyber attacks such as WannaCry and NotPetya caused financial and reputational damage to organizations across the world; because of its mission-critical role, the impact of DNS-based attacks can be just as significant.
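To make the point concrete, here is a minimal, illustrative sketch (not taken from the report) of the resolution step that precedes almost every application request; the hostname is a placeholder. If this step is slow or fails, the application is effectively unreachable regardless of how healthy its servers are.

```python
# Minimal illustration: reaching an app by name starts with DNS resolution,
# so degraded or blocked resolution delays or breaks every request after it.
import socket
import time

def resolve(hostname: str) -> None:
    start = time.perf_counter()
    try:
        results = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
        elapsed_ms = (time.perf_counter() - start) * 1000
        addresses = sorted({entry[4][0] for entry in results})
        print(f"{hostname} resolved to {addresses} in {elapsed_ms:.1f} ms")
    except socket.gaierror as err:
        # A failed lookup means the app is unreachable even if its
        # back-end infrastructure is perfectly healthy.
        print(f"DNS resolution failed for {hostname}: {err}")

resolve("example.com")  # placeholder hostname
```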
The top impacts of DNS attacks - damaged reputation, business continuity and finances
Three in five organizations (63%) suffered application downtime, 45% had their websites compromised, and just over a quarter (27%) experienced business downtime as a direct consequence. These could all potentially lead to serious NISD (Network and Information Security Directive) penalties. In addition, around a quarter of businesses (26%) lost brand equity due to DNS attacks.
Data theft via DNS continues to be a problem. To protect against this, organizations are prioritizing securing network endpoints (32%) and looking for better DNS traffic monitoring (29%).
David Williamson, CEO of EfficientIP, summarized the research: “While these figures are the worst we have seen in five years of research, the good news is that the importance of DNS is at last being widely recognized by businesses. Mainstream organizations are now starting to leverage DNS as a key part of their security strategy to help with threat intelligence, policy control and automation, thus building a good foundation for their zero trust plan.”
New data from Synergy Research Group shows that just 20 metro areas account for 56% of worldwide retail colocation revenues. Ranked by revenue generated in Q1 2019, the top five metros are Tokyo, New York, London, Washington and Shanghai, which in aggregate account for 25% of the worldwide market.
The next 15 largest metro markets account for another 31% of the market. Those top 20 metros include eight in North America, seven in the APAC region, four in EMEA and one in Latin America. In Q1 Equinix was the retail colocation market leader by revenue in 14 of the top 20 metros, with NTT being the only other operator to lead in more than one of the top metros. In wholesale colocation there is a somewhat different mix and ranking of metros, but the market is even more concentrated with the top 20 metros accounting for 71% of worldwide revenue. North America features more heavily in wholesale and accounts for eleven of the top 20 metros. Digital Realty is the leader in eight of the top 20 wholesale markets and Global Switch the leader in three others. Other colocation operators that feature heavily in the top 20 metros include 21Vianet, @Tokyo, China Telecom, CoreSite, CyrusOne, Interxion, KDDI, SingTel and QTS.
Over the last twelve quarters the top 20 metro share of the worldwide retail colocation market has been relatively constant at around the 55-56% mark, despite a push to expand data center footprints and to build out more edge locations. Among the top 20 metros, those with the highest retail colocation growth rates (measured in local currencies) are Sao Paulo, Sydney, Beijing, Shanghai and Frankfurt, all of which had a rolling annualized growth rate of over 15%. While the US didn’t feature among the highest growth metros for retail colocation, on the wholesale side both Washington/Northern Virginia and Silicon Valley are growing at double-digit rates.
“We continue to see robust demand for colocation across the board, with the standout regional growth numbers coming from APAC and the highest segment-level growth coming from colocation services for hyperscale operators,” said John Dinsdale, a Chief Analyst and Research Director at Synergy Research Group. “It is particularly noteworthy that the market remains concentrated around the most important economic hubs, reflecting the importance of proximity to major customers. Hyperscale operators often focus their own large data center builds away from the major metros, in areas where real estate prices and operating costs are much lower, so they too will increasingly rely on colocation providers to help target clients in key metros. The large metros will maintain their share of the colocation market over the coming years.”
One of the features of the rising force that is managed services is how many other IT organisations want to be its friend. The latest, and by no means newcomers to the party, are the distributors. As they move quickly on from being mere product suppliers, many are recognising that their changed role requires them to adapt to the supply of services, which also affects their contract conditions.
At the industry’s leading regional distributor forum, the European Summit of the GTDC (Global Technology Distribution Council), which globally represents over $150bn of annual sales, distributors talked about how they are working with providers of managed services – and acknowledged that there are still some areas of the relationship to work through.
Tim Henneveld of distributor TIM wants the process of change to happen faster: “There are always things that can be done better. Among the challenges that we see is one that has always been there - that sometimes things take too long to get done or to get changed.”
The challenge coming out of the MSP business is also working out how to integrate MSP programmes with vendors’ legacy programs for the channel, he says. “So usually there is a channel model and then there's an MSP model and these businesses are growing together. And this has to be reflected from our perspective and in the channels with the vendors.”
A second challenge is how cloud services are resold by the vendors through distributors. This covers licensing and financing, but that’s a small part of it, says Tim Henneveld. It’s more about the contract. “So we see challenges in risk allocation. We always have an issue with contracts for cloud services that mean using a certain court in a certain country and a certain code of regulation, and there are issues with the limitation of liability offered in other countries.”
“That risk discussion, and who takes what risk in the reselling of cloud, is something that in our opinion is not solved today,” he concluded.
Eric Nowak, President of Arrow ECS EMEA: “Predictability means both sides have their own responsibilities and this calls for consistency. This is a long-term partnership that we have as distributors.”
And then there is how distributors market themselves to the channels, including MSPs, as Miriam Murphy, SVP at Tech Data, says: “It is now much more of a solutions-sell environment. The really important thing as we build that out is that our vendor partners are very open to collaborating with each other in activities that we do. They should be open to investing in marketing funding and enablement funding to support multi-vendor solutions, and also in solutions that involve services.”
All of which is good news for MSPs, though in such a fast-changing industry they need to be on top of trends and new offerings, especially those coming through distribution. The next snapshot of the industry will be revealed at the London Managed Services & Hosting Summit 2019 on 18th September (https://mshsummit.com/), and at the similar Manchester event on 30 October (https://mshsummit.com/north/).
The Managed Services & Hosting Summits are firmly established as the leading Managed Services event for the channel and feature conference session presentations by major industry speakers and a range of sessions exploring both technical and sales/business issues.
Now in its ninth year, the London Managed Services & Hosting Summit 2019 aims to provide insights into how managed services continue to grow and change as customer demands push suppliers into a strategic advisory role, and as the pressures of compliance and resilience impact the business model at a time of limited resources. Managed Service Providers, other channels and their suppliers can evolve new business models and relationships, but are looking for advice and support as well as strategic business guidance.
Reflecting the transformational nature of the enterprise technology world which it serves, this year’s 10th edition of Angel Business Communications’ premier IT awards has a new name. The SVC Awards have become... the SDC Awards!
10 years ago, SVC stood for Storage, Virtualisation and Channel – and the SVC Awards focused on these important pillars of the overall IT industry. Fast forward to 2019, and virtualisation has given way to software-defined, which, in turn, has become an important sub-set of digital transformation. Storage remains important, and the Cloud has emerged as a major new approach to the creation and supply of IT products and services. Hence the decision to change one small letter in our awards; but, in doing so, we believe that we’ve created a set of awards that are of much bigger significance to the IT industry.
The SDC (Storage, Digitalisation + Cloud) Awards is the new name for Angel Business Communications’ IT awards, which are now firmly focused on recognising and rewarding success in the products and services that are the foundation for digital transformation!
Categories
The SDC Awards 2019 feature a number of categories, providing a wide range of options for organisations and individuals involved in the IT industry to participate.
Our editorial staff will validate entries only to ensure they have met the entry criteria outlined for each category. We will then announce the ‘shortlist’ to be voted on by the readers of the Digitalisation World stable of titles. Voting takes place in October and November, and the selection of winners is based solely on the public votes received. The winners will be announced at a gala evening event at London’s Millennium Gloucester Hotel on 27 November 2019.
Vendor Channel Program of the Year
Managed Services Provider Innovation of the Year
IT Systems Distributor of the Year
IT Systems Reseller/Managed Services Provider of the Year
If you’d like to enter your company for one or more categories, you can do so at:
Robotic process automation (RPA) software revenue grew 63.1% in 2018 to $846 million, making it the fastest-growing segment of the global enterprise software market, according to Gartner, Inc. Gartner expects RPA software revenue to reach $1.3 billion in 2019.
“The RPA market has grown since our last forecast, driven by digital business demands as organizations look for ‘straight-through’ processing,” said Fabrizio Biscotti, research vice president at Gartner. “Competition is intense, with nine of the top 10 vendors changing market share position in 2018.”
The top-five RPA vendors controlled 47% of the market in 2018. The vendors ranked sixth and seventh achieved triple-digit revenue growth (see Table 1). “This makes the top-five ranking appear largely unsettled,” Mr. Biscotti added.
Table 1: RPA Software Market Share by Revenue, Worldwide (Millions of Dollars)
2017 Rank | 2018 Rank | Company | 2017 Revenue | 2018 Revenue | 2017-2018 Growth (%) | 2018 Market Share (%)
5 | 1 | UiPath | 15.7 | 114.8 | 629.5 | 13.6
1 | 2 | Automation Anywhere | 74.0 | 108.4 | 46.5 | 12.8
3 | 3 | Blue Prism | 34.6 | 71.0 | 105.0 | 8.4
2 | 4 | NICE | 36.0 | 61.5 | 70.6 | 7.3
4 | 5 | Pegasystems | 28.9 | 41.0 | 41.9 | 4.8
8 | 6 | Kofax | 10.4 | 37.0 | 256.6 | 4.4
11 | 7 | NTT-AT | 4.9 | 28.5 | 480.9 | 3.4
6 | 8 | EdgeVerve Systems | 15.7 | 20.5 | 30.1 | 2.4
7 | 9 | OpenConnect | 15.2 | 16.0 | 5.3 | 1.9
9 | 10 | HelpSystems | 10.2 | 13.7 | 34.3 | 1.6
– | – | Others | 273.0 | 333.8 | 22.2 | 39.4
– | – | Total | 518.8 | 846.2 | 63.1 | 100.0
Due to rounding, numbers may not add up precisely to the totals shown.
Source: Gartner (June 2019)
North America continued to dominate the RPA software market, with a 51% share in 2018, but its share dropped by 2 percentage points year over year. Western Europe held the No. 2 position, with a 23% share. Japan came third, with adoption growth of 124% in 2018. “This shows that RPA software is appealing to organizations across the world, due to its quicker deployment cycle times, compared with other options such as business process management platforms and business process outsourcing,” said Mr. Biscotti.
Digital Transformation Efforts Drive RPA Adoption
Although RPA software can be found in all industries, the biggest adopters are banks, insurance companies, telcos and utility companies. These organizations traditionally have many legacy systems and choose RPA solutions to ensure integration functionality. “The ability to integrate legacy systems is the key driver for RPA projects. By using this technology, organizations can quickly accelerate their digital transformation initiatives, while unlocking the value associated with past technology investments,” said Mr. Biscotti.
Gartner expects the RPA software market to look very different three years from now. Large software companies, such as IBM, Microsoft and SAP, are partnering with or acquiring RPA software providers, which means they are increasing the awareness and traction of RPA software in their sizable customer bases. At the same time, new vendors are seizing the opportunity to adapt traditional RPA capabilities for digital business demands, such as event stream processing and real-time analytics.
“This is an exciting time for RPA vendors,” said Mr. Biscotti. “However, the current top players will face increasing competition, as new entrants will continue to enter a market whose fast evolution is blurring the lines distinguishing RPA from other automation technologies, such as optical character recognition and artificial intelligence.”
The future of the database market is in the Cloud
By 2022, 75% of all databases will be deployed or migrated to a cloud platform, with only 5% ever considered for repatriation to on-premises, according to Gartner, Inc. This trend will largely be due to databases used for analytics, and the SaaS model.
“According to inquiries with Gartner clients, organizations are developing and deploying new applications in the cloud and moving existing assets at an increasing rate, and we believe this will continue to increase,” said Donald Feinberg, distinguished research vice president at Gartner. “We also believe this begins with systems for data management solutions for analytics (DMSA) use cases — such as data warehousing, data lakes and other use cases where data is used for analytics, artificial intelligence (AI) and machine learning (ML). Increasingly, operational systems are also moving to the cloud, especially with conversion to the SaaS application model.”
Gartner research shows that 2018 worldwide database management system (DBMS) revenue grew 18.4% to $46 billion. Cloud DBMS revenue accounts for 68% of that 18.4% growth — and Microsoft and Amazon Web Services (AWS) account for 75.5% of the total market growth. This trend reinforces that cloud service provider (CSP) infrastructures and the services that run on them are becoming the new data management platform.
Ecosystems are forming around CSPs that both integrate services within a single CSP and provide early steps toward intercloud data management. This is in distinct contrast to the on-premises approach, where individual products often serve multiple roles but rarely offer their own built-in capabilities to support integration with adjacent products within the on-premises deployment environment. While there is some growth in on-premises systems, this growth is rarely from new on-premises deployments; it is generally due to price increases and forced upgrades undertaken to avoid risk.
“Ultimately what this shows is that the prominence of the CSP infrastructure, its native offerings, and the third-party offerings that run on them is assured,” said Mr. Feinberg. “A recent Gartner cloud adoption survey showed that of those on the public cloud, 81% were using more than one CSP. The cloud ecosystem is expanding beyond the scope of a single CSP — to multiple CSPs — for most cloud consumers.”
Leveraging the automation continuum is security and risk management leaders’ latest imperative in creating and preserving value at their organization, according to Gartner, Inc.
Katell Thielemann, research vice president at Gartner, recently explained to an audience of more than 3,500 security and risk management professionals at the Gartner Security and Risk Management Summit that the automation continuum emerging in the security and risk landscape is one where new mindsets, practices and technologies are converging to unlock new capabilities. Identity, data, and the development of new products and services were identified as the three critical areas for applying automation in the security and risk enterprise.
“We are no longer asking the singular question of how we’re managing risk and providing security to our organization. We’re now being asked how we’re helping the enterprise realize more value while assessing and managing risk, security and even safety. The best way to bring value to your organizations today is to leverage automation,” said Ms. Thielemann.
Automation is All Around Us
Automation is already all around us — and it is starting to impact the security and risk world in two ways:
“Automation follows a continuum of sophistication and complexity, and can use a number of techniques, either stand-alone or in combination,” said David Mahdi, senior research director at Gartner. “For example, robotic process automation currently works best in task-centric environments, but process automation is evolving to increasingly powerful bots, and eventually to autonomous process orchestration.”
By 2021, 17% of the average organization’s revenue will be devoted to digital business initiatives, and by 2022, content creators will produce more than 30% of their digital content with the aid of AI content-generation techniques.
“What this means to security and risk management professionals is that our organizations are likely building solutions and making technology-related choices often without realizing the risk implications of what they are doing,” said Mr. Mahdi.
Balancing Emerging Technologies and People
“Automation is just the beginning. Emerging technologies will change everything and impact security and risk directly,” said Beth Schumaecker, director, advisory at Gartner. “Our reliance on data is ever increasing, yet it poses one of the largest privacy risks to organizations. In the next two years, half of large industrial companies will use some emerging form of digital twins, which will also need to be secured.”
The demands of these emerging technologies and digital transformation introduce new talent challenges for the security function, altering how organizations expect security to be delivered.
“Digital transformation demands that security staff play a wider range of roles, from strategic consultants to threat profilers to product managers, which in turn require new skills and competencies,” said Ms. Schumaecker. “It’s already impossible to fill all our existing vacancies.”
Mission-Critical Areas in Automation
The three mission-critical areas in today’s enterprises are automation in identity, data and new products or services development:
Interest in blockchain continues to be high, but there is still a significant gap between the hype and market reality. Only 11% of CIOs indicated they have deployed or are in short-term planning with blockchain, according to the Gartner, Inc. 2019 CIO Agenda Survey of more than 3,000 CIOs. This may be because the majority of projects fail to get beyond the initial experimentation phase.
“Blockchain is currently sliding down toward the Trough of Disillusionment in Gartner’s latest ‘Hype Cycle for Emerging Technologies,’” said Adrian Leow, senior research director at Gartner. “The blockchain platforms and technologies market is still nascent and there is no industry consensus on key components such as product concept, feature set and core application requirements. We do not expect that there will be a single dominant platform within the next five years.”
To successfully conduct a blockchain project, it is necessary to understand the root causes for failure. Gartner has identified the seven most common mistakes in blockchain projects and how to avoid them.
No. 1: Misunderstanding or Misusing Blockchain Technology
Gartner has found that the majority of blockchain projects are used solely for recording data on blockchain platforms via distributed ledger technology (DLT), ignoring key features such as decentralized consensus, tokenization or smart contracts.
“DLT is a component of blockchain, not the whole blockchain. The fact that organizations are so infrequently using the complete set of blockchain features prompts the question of whether they even need blockchain,” Mr. Leow said. “It is fine to start with DLT, but the priority for CIOs should be to clarify the use cases for blockchain as a whole and move into projects that also utilize other blockchain components.”
No. 2: Assuming the Technology Is Ready for Production Use
The blockchain platform market is huge and largely composed of fragmented offerings that try to differentiate themselves in various ways. Some focus on confidentiality, some on tokenization, others on universal computing. Most are too immature for large-scale production work that comes with the accompanying and requisite systems, security and network management services.
However, this will change within the next few years. CIOs should monitor the evolving capabilities of blockchain platforms and align their blockchain project timeline accordingly.
No. 3: Confusing a Protocol With a Business Solution
Blockchain is a foundation-level technology that can be used in a variety of industries and scenarios, ranging from supply chain management to medical information systems. It is not a complete application, as a complete application must also include features such as a user interface, business logic, data persistence and interoperability mechanisms.
“When it comes to blockchain, there is the implicit assumption that the foundation-level technology is not far removed from a complete application solution. This is not the case. It helps to view blockchain as a protocol to perform a certain task within a full application. No one would assume a protocol can be the sole base for a whole e-commerce system or a social network,” Mr. Leow added.
No. 4: Viewing Blockchain Purely as a Database or Storage Mechanism
Blockchain technology was designed to provide an authoritative, immutable, trusted record of events arising out of a dynamic collection of untrusted parties. This design model comes at the price of database management capabilities.
In its current form, blockchain technology does not implement the full “create, read, update, delete” model that is found in conventional database management technology. Instead, only “create” and “read” are supported. “CIOs should assess the data management requirements of their blockchain project. A conventional data management solution might be the better option in some cases,” Mr. Leow said.
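To make the append-only point concrete, the sketch below is an illustrative toy in Python (not Gartner's example, and far simpler than any production DLT): a hash-chained ledger that exposes only create and read operations, so a “correction” can only ever be a new record.

```python
import hashlib
import json
import time

class AppendOnlyLedger:
    """Toy ledger: records can be created and read, never updated or deleted."""

    def __init__(self):
        self._chain = []

    def create(self, payload: dict) -> int:
        """Append a new record, chained to the previous one by hash."""
        prev_hash = self._chain[-1]["hash"] if self._chain else "0" * 64
        record = {"index": len(self._chain),
                  "timestamp": time.time(),
                  "payload": payload,
                  "prev_hash": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self._chain.append(record)
        return record["index"]

    def read(self, index: int) -> dict:
        """Read an existing record; there is deliberately no update() or delete()."""
        return self._chain[index]

ledger = AppendOnlyLedger()
i = ledger.create({"shipment": "SKU-123", "status": "dispatched"})
print(ledger.read(i)["payload"])   # a 'correction' would be a new record, not an edit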
No. 5: Assuming That Interoperability Standards Exist
While some vendors of blockchain technology platforms talk about interoperability with other blockchains, it is difficult to envision interoperability when most platforms and their underlying protocols are still being designed or developed.
Organizations should view vendor discussions regarding interoperability as a marketing strategy. It is supposed to benefit the supplier’s competitive standing but will not necessarily deliver benefits to the end-user organization. “Never select a blockchain platform with the expectation that it will interoperate with next year’s technology from a different vendor,” said Mr. Leow.
No. 6: Assuming Smart Contract Technology Is a Solved Problem
Smart contracts are perhaps the most powerful aspect of blockchain-enabling technologies. They add dynamic behavior to transactions. Conceptually, smart contracts can be understood as stored procedures that are associated with specific transaction records. But unlike a stored procedure in a centralized system, smart contracts are executed by all nodes in the peer-to-peer network, resulting in challenges in scalability and manageability that haven’t been fully addressed yet.
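To illustrate the stored-procedure analogy, here is a minimal, hypothetical Python sketch (not a real blockchain client): the “contract” is a deterministic function that every node applies to the same transactions, which is exactly why scalability and manageability become harder than on a single database server.

```python
# Hypothetical sketch: a "smart contract" modelled as a deterministic function
# that every node applies to the same transactions, so all nodes converge on
# the same state. Names and logic are illustrative assumptions only.

def escrow_contract(state: dict, tx: dict) -> dict:
    """Release funds only once both parties have signed."""
    new_state = dict(state)
    new_state["signatures"] = new_state["signatures"] | {tx["signer"]}
    if {"buyer", "seller"} <= new_state["signatures"]:
        new_state["released"] = True
    return new_state

# Three independent nodes, each holding its own copy of the contract state.
nodes = [{"signatures": set(), "released": False} for _ in range(3)]
for tx in [{"signer": "buyer"}, {"signer": "seller"}]:
    # Unlike a stored procedure on one database server, every node executes it.
    nodes = [escrow_contract(state, tx) for state in nodes]

assert all(n["released"] for n in nodes)
print(nodes[0])
```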
Smart contract technology will still undergo significant changes. CIOs should not plan for full adoption yet but run small experiments first. This area of blockchain will continue to mature over the next two or three years.
No. 7: Ignoring Governance Issues
While governance issues in private or permissioned blockchains will usually be handled by the owner of the blockchain, the situation is different with public blockchains.
“Governance in public blockchains such as Ethereum and Bitcoin is mostly aimed at technical issues. Human behavior and motivation are rarely addressed. CIOs must be aware of the risk that blockchain governance issues might pose for the success of their project. Larger organizations in particular should think about joining or forming consortia to help define governance models for the public blockchain,” Mr. Leow concluded.
The number of devices connected to the Internet, including the machines, sensors, and cameras that make up the Internet of Things (IoT), continues to grow at a steady pace. A new forecast from International Data Corporation (IDC) estimates that there will be 41.6 billion connected IoT devices, or "things," generating 79.4 zettabytes (ZB) of data in 2025.
As the number of connected IoT devices grows, the amount of data generated by these devices will also grow. Some of this data is small and bursty, indicating a single metric of a machine's health, while large amounts of data can be generated by video surveillance cameras using computer vision to analyze crowds of people, for example. There is an obvious direct relationship between all the "things" and the data these things create. IDC projects that the amount of data created by these connected IoT devices will see a compound annual growth rate (CAGR) of 28.7% over the 2018-2025 forecast period. Most of the data is being generated by video surveillance applications, but other categories such as industrial and medical will increasingly generate more data over time.
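As a rough, back-of-the-envelope illustration (not IDC's own model), the compound growth formula connects those figures: dividing the 2025 forecast by (1 + CAGR) raised to the number of years gives the implied 2018 baseline.

```python
# Back-of-the-envelope check of the stated figures (illustrative only; IDC's
# actual base-year convention may differ).
cagr = 0.287                 # 28.7% compound annual growth, 2018-2025
data_2025_zb = 79.4          # zettabytes forecast for 2025
years = 2025 - 2018          # seven compounding periods

implied_2018 = data_2025_zb / (1 + cagr) ** years
print(f"Implied 2018 baseline: {implied_2018:.1f} ZB")   # roughly 13.6 ZB
```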
"As the market continues to mature, IoT increasingly becomes the fabric enabling the exchange of information from 'things', people, and processes. Data becomes the common denominator – as it is captured, processed, and used from the nearest and farthest edges of the network to create value for industries, governments, and individuals' lives," said Carrie MacGillivray, group vice president, IoT, 5G and Mobility at IDC. "Understanding the amount of data created from the myriad of connected devices allows organizations and vendors to build solutions that can scale in this accelerating data-driven IoT market."
"Mankind is on a quest to digitize the world and a growing global DataSphere is the result. The world around us is becoming more 'sensorized,' bringing new levels of intelligence and order to personal and seemingly random environments, and Internet of Things devices are an integral part of this process," said David Reinsel, senior vice president, IDC's Global DataSphere. "However, with every new connection comes a responsibility to navigate and manage new security vulnerabilities and privacy concerns. Companies must address these data hazards as they advance new levels of efficiency and customer experience."
While it's not surprising to see industrial and automotive equipment represent the largest opportunity of connected "things," IDC expects to see strong adoption of household (e.g., smart home) and wearable devices in the near term. Over the longer term, however, driven by public safety concerns, decreasing camera costs, and higher-bandwidth connectivity options (including the deployment of 5G networks offering low latency, dense coverage, and high bandwidth), video surveillance adoption will grow at a rapid rate. Drones, while still early in adoption today, show great potential to access remote or hard-to-reach locations and will also be a big driver of data creation using cameras.
While the video surveillance category will drive a large share of the IoT data created, the industrial and automotive category will see the fastest data growth rates over the forecast period with a CAGR of 60%. This is the result of the increasing number of "things" (other than video surveillance cameras) that are capturing data continuously as well as more advanced sensors capturing more (and richer) metrics or machine functions. This rich data includes audio, image, and video. And, where analytics and artificial intelligence are magnifying data creation beyond just the data capture, data per device is growing at a faster pace than data per video surveillance camera.
It should also be noted that the IoT metadata category is a growing source of data to be managed and leveraged. IoT metadata is essentially all the data that is created about other IoT data files. While not having a direct operational or informational function in a specific data category (like industrial or video surveillance), metadata provides information about the data files captured or created by the IoT device. Metadata is usually very small compared with original source files like a video image, sometimes smaller by orders of magnitude. In other cases, however, metadata can approach the size of the source file, such as in a manufacturing environment. In all cases, metadata is valuable data that can be leveraged to inform intelligent systems, drive personalization, or bring context to seemingly random scenarios or data sets. In other words, metadata is a prime candidate to be fed into NoSQL databases like MongoDB to bring structure to unstructured content, or into cognitive systems to bring new levels of understanding, intelligence, and order to outwardly random environments.
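As a hedged illustration of that last point, the sketch below stores hypothetical IoT metadata for a video clip in MongoDB using the pymongo driver; the field names, connection string and collection are assumptions made for the example, not a reference schema.

```python
# Hypothetical sketch: storing IoT metadata (data about a captured video clip)
# in MongoDB via the pymongo driver. Assumes a local MongoDB instance; the
# schema shown is illustrative only.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
metadata = client["iot"]["clip_metadata"]

doc = {
    "device_id": "cam-0042",
    "captured_at": datetime.now(timezone.utc),
    "duration_s": 30,
    "resolution": "1920x1080",
    "size_bytes": 48_500_000,          # the source clip itself stays in object storage
    "labels": ["crowd", "entrance"],   # e.g. produced by a computer-vision pipeline
    "source_uri": "s3://bucket/clips/cam-0042/2019-06-01T12-00-00.mp4",
}
metadata.insert_one(doc)

# The flexible document model lets downstream systems query by label or device
# without a rigid schema:
for clip in metadata.find({"labels": "crowd"}).limit(5):
    print(clip["device_id"], clip["captured_at"])
```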
There isn't enough computing capability today to process the amount of data being created and stored. A new study from International Data Corporation (IDC) finds that the processing and transformation required to convert that data into useful, valuable insights for today's organizations, and for a new class of workloads, must scale faster than Moore's Law ever predicted.
To address this gap, the computing industry is taking a new path that leverages alternative computing architectures such as DSPs, GPUs, and FPGAs for acceleration and offloading of computing tasks, in order to limit the tax on the general-purpose architecture in the system. These other architectures have been key to the enablement of artificial intelligence, including the growing use of deep learning models. At the edge, DSPs, FPGAs, and optimized architecture blocks in SoCs have proven more suitable for initial inference applications in robotics, drones, wearables, and other consumer devices such as voice-assisted speakers.
The IDC study, How Much Compute Is in the World and What It Can/Can't Do? (IDC # US44034119), is part of IDC's emerging Global DataSphere program, which sizes and forecasts data creation, capture, and replication across 70 categories of content-creating things — including IoT devices. The data is then categorized into the types of data being created to understand various trends in data usage, consumption, and storage.
The study builds on over twenty years of extensive work in the embedded and computing areas of research at IDC, including leveraging an embedded market model covering about 300 system markets and the key underlying technologies that enable the value of a system. The study analyzes the shift in the computing paradigm as artificial intelligence (AI) moves from the datacenter to the edge and endpoint, expanding the choices of computing architectures for each system market as features and optimizations are mapped closer to workloads.
For decades, advancements in process technology and silicon design, together with the industry's dedication to Moore's Law, delivered predictable gains in microprocessor performance and in transistor functionality and integration in systems on chips (SoCs). These advancements have been instrumental in establishing the cadence of growth and scale of client computing, smartphones, and cloud infrastructure.
Microprocessors have been at the very core of computing, and today Intel, AMD, and ARM are the bellwethers for the cadence of computing. However, the story does not end there; we are at the beginning of a large market force as AI becomes more ubiquitous across a broad base of industries and drives intelligence and inferencing to the edge.
"AI technology will continue to play a critical role in redefining how computing must be implemented in order to meet the growing diversity of devices and applications," said Mario Morales, program vice president for Enabling Technologies and Semiconductors at IDC. "Vendors are at the start of their business transformation and what they need from their partners is no longer just products and technology. To address the IoT and endpoint opportunity, performance must always find a balance with power and efficiency. Moving forward, vendors and users will require roadmaps and not just chips. This is a fundamental change for technology suppliers in the computing market and only those who adapt will remain relevant."
A new forecast from the International Data Corporation (IDC) Worldwide Semiannual Smart Cities Spending Guide shows global spending on smart cities initiatives will reach $189.5 billion in 2023. The top priorities for these initiatives will be resilient energy and infrastructure projects followed by data-driven public safety and intelligent transportation. Together, these priority areas will account for more than half of all smart cities spending throughout the 2019-2023 forecast.
"In the latest release of IDC's Worldwide Smart Cities Spending Guide, we expanded the scope of our research to include smart ecosystems, added detail for digital evidence management and smart grids for electricity and gas, and expanded our cities dataset to include over 180 named cities," said Serena Da Rold, program manager in IDC's Customer Insights & Analysis group. "Although smart grid and smart meter investments still represent a large share of spending within smart cities, we see much stronger growth in other areas, related to intelligent transportation and data-driven public safety, as well as platform-related use cases and digital twin, which are increasingly implemented at the core of smart cities projects globally."
The use cases that will experience the most spending over the forecast period are closely aligned with the leading strategic priorities: smart grid, fixed visual surveillance, advanced public transportation, smart outdoor lighting, and intelligent traffic management. These five use cases will account for more than half of all smart cities spending in 2019, although their share will decline somewhat by 2023. The use cases that will see the fastest spending growth over the five-year forecast are vehicle-to-everything (V2X) connectivity, digital twin, and officer wearables.
Singapore will remain the top investor in smart cities initiatives, driven by the Virtual Singapore project. New York City will have the second largest spending total this year, followed by Tokyo and London. Beijing and Shanghai were essentially tied for the number 5 position and spending in all these cities is expected to surpass the $1 billion mark in 2020.
On a regional basis, the United States, Western Europe, and China will account for more than 70% of all smart cities spending throughout the forecast. Japan and the Middle East and Africa (MEA) will experience the fastest growth in smart cities spending with CAGRs of around 21%.
"We are excited to present our continued expansion of this deep dive into the investment priorities of buyers in the urban ecosystem, with more cities added to our database of smart city spending and new forecasts that show the expanded view of smart cities, such as Smart Stadiums and Smart Campuses," said Ruthbea Yesner, vice president of IDC Government Insights and Smart Cities programs. "As our research shows, there is steady growth across the globe in the 34 use cases we have sized and forecast."
Public Cloud Services spending to more than double by 2023
Worldwide spending on public cloud services and infrastructure will more than double over the 2019-2023 forecast period, according to the latest update to the International Data Corporation (IDC) Worldwide Semiannual Public Cloud Services Spending Guide. With a five-year compound annual growth rate (CAGR) of 22.3%, public cloud spending will grow from $229 billion in 2019 to nearly $500 billion in 2023.
"Adoption of public (shared) cloud services continues to grow rapidly as enterprises, especially in professional services, telecommunications, and retail, continue to shift from traditional application software to software as a service (SaaS) and from traditional infrastructure to infrastructure as a service (IaaS) to empower customer experience and operational-led digital transformation (DX) initiatives," said Eileen Smith, program director, Customer Insights and Analysis.
Software as a Service (SaaS) will be the largest category of cloud computing, capturing more than half of all public cloud spending throughout the forecast. SaaS spending, which comprises applications and system infrastructure software (SIS), will be dominated by applications purchases. The leading SaaS applications will be customer relationship management (CRM) and enterprise resource management (ERM). SIS spending will be led by purchases of security software and system and service management software.
Infrastructure as a Service (IaaS) will be the second largest category of public cloud spending throughout the forecast, followed by Platform as a Service (PaaS). IaaS spending, comprised of servers and storage devices, will also be the fastest growing category of cloud spending with a five-year CAGR of 32.0%. PaaS spending will grow nearly as fast (29.9% CAGR) led by purchases of data management software, application platforms, and integration and orchestration middleware.
Three industries – professional services, discrete manufacturing, and banking – will account for more than one third of all public cloud services spending throughout the forecast. While SaaS will be the leading category of investment for all industries, IaaS will see its share of spending increase significantly for industries that are building data and compute intensive services. For example, IaaS spending will represent more than 40% of public cloud services spending by the professional services industry in 2023 compared to less than 30% for most other industries. Professional services will also see the fastest growth in public cloud spending with a five-year CAGR of 25.6%.
On a geographic basis, the United States will be the largest public cloud services market, accounting for more than half the worldwide total through 2023. Western Europe will be the second largest market with nearly 20% of the worldwide total. China will experience the fastest growth in public cloud services spending over the five-year forecast period with a 49.1% CAGR. Latin America will also deliver strong public cloud spending growth with a 38.3% CAGR.
Very large businesses (more than 1000 employees) will account for more than half of all public cloud spending throughout the forecast, while medium-size businesses (100-499 employees) will deliver around 16% of the worldwide total. Small businesses (10-99 employees) will trail large businesses (500-999 employees) by a few percentage points while the spending share from small offices (1-9 employees) will be in the low single digits. All the company size categories except for very large businesses will experience spending growth greater than the overall market.
Spending on customer experience (CX) was reported at $97 billion in 2018 and is expected to increase to $128 billion by 2022, growing at a healthy 7% five-year CAGR, according to the International Data Corporation (IDC) Worldwide Semiannual Customer Experience Spending Guide. The European industries spending the most on CX in 2019 will be banking, retail, and discrete manufacturing. Together, these verticals will absorb 33% of the European CX spend this year. Retail will also have the fastest growing spend on CX through 2022, outgrowing banking by 2021.
Customer care and support, digital marketing, and order fulfillment are the use cases with the highest CX spending today and will continue to be strong investment areas through 2022. Investing in CX represents a clear opportunity for industries to differentiate: implementing these use cases helps mold public brand perception around the customer by improving websites, social media interactions, and product and service promotions. Looking at longer-term opportunities, omni-channel content will be the fastest growing CX use case by 2022, with European companies focusing on this space to build organizational experience-delivery competency, leveraging investments in content and experience design to lower the cost of supporting new channels and ensure brand consistency. Omni-channel content reflects the core foundation of the future of CX: the optimization of content across channels at every point in the customer journey, creating a non-linear experience around the user.
Emerging technologies (such as AI, IoT, and AR/VR) and hyper-micro personalization are fueling investments in CX, together with rising customer expectations, intensified competition, ever-changing customer behaviors, and stronger demand for personalization. The innovations in CX are about centering the experience of a product or service around the user, approaching each customer as an individual in real time and moving away from segment-based approaches to customer engagement.
"Customer Experience is the top business priority for European companies in 2019," said Andrea Minonne, senior research analyst, IDC Customer Insight & Analysis in Europe. "Businesses are moving from traditional ways of reaching out to customers and are embracing more digitized and personalized approaches to delivering empathy where the focus is on constantly learning from customers. As a customer-facing industry, retail spend on CX is moving fast as retailers have fully understood how important it is to embed CX in their business strategy."
Worldwide spending on augmented reality and virtual reality (AR/VR) is forecast to reach $160 billion in 2023, up significantly from the $16.8 billion forecast for 2019. According to the International Data Corporation (IDC) Worldwide Semiannual Augmented and Virtual Reality Spending Guide, the five-year compound annual growth rate (CAGR) for AR/VR spending will be 78.3%.
IDC expects much of the growth in AR/VR spending will be driven by accelerating investments from the commercial and public sectors. The strongest spending growth over the 2019-2023 forecast period will come from the financial (133.9% CAGR) and infrastructure (122.8% CAGR) sectors, while the manufacturing and public sectors will follow closely. In comparison, consumer spending on AR/VR is expected to deliver a five-year CAGR of 52.2%.
"A growing number of companies are turning to virtual reality as a way to drive training, collaboration, design, sales, and numerous other use cases," said Tom Mainelli, group vice president, Devices and Consumer Research at IDC. "Augmented reality uses cases are also growing with a wide variety of companies leveraging next-generation hardware, software, and services to fundamentally change existing business processes and bringing new capabilities to first-line workers who require hands-free technology."
The commercial use cases that are forecast to receive the largest investments in 2023 are training ($8.5 billion), industrial maintenance ($4.3 billion), and retail showcasing ($3.9 billion). In comparison, the three consumer use cases for AR/VR (VR gaming, VR video/feature viewing, and AR gaming) are expected to see spending of $20.8 billion in 2023. The use cases that will see the fastest spending growth over the forecast period include AR for lab and field education, AR for public infrastructure maintenance, and AR for anatomy diagnostic.
Hardware will account for more than half of all AR/VR spending throughout the forecast, followed by software and services. Strong spending growth (189.2% CAGR) for AR viewers will make this the largest hardware category by the end of the forecast, followed by VR host devices. AR software will be the second fastest growing category, enabling it to overtake VR software spending by 2022. Services spending will be driven by strong growth from AR consulting services, AR custom application development, and AR systems integration.
"Augmented reality is gaining share in the commercial market due to its ability to facilitate tasks, provide access to resources, and solve complex problems," said Marcus Torchia, research director, Customer Insights & Analysis at IDC. "Industries such as manufacturing, utilities, telecommunications, and logistics are increasingly adopting AR for performing tasks such as assembly, maintenance, and repair."
On a geographic basis, China will see the largest AR/VR spending totals throughout the forecast, followed by the United States. The two countries will account for nearly three quarters of all spending worldwide by 2023. Western Europe will see the fastest growth in AR/VR spending with a five-year CAGR of 101.1%, followed by the U.S. and China.
Asks Bryan Betts, Principal Analyst, Freeform Dynamics Ltd.
Anyone seeking more economical and flexible IT now has a range of options. For example, on top of virtualisation we can add various software-defined technologies, including hyper-converged infrastructure (HCI) and ‘the cloud’ - especially the public/private combination known as hybrid cloud and the cross-platform combination called multi-cloud.
But do these options deliver what they promise? Perhaps more importantly, even if they do deliver, does it come at the expense of unwanted additional complexity? To get some answers to these questions, we dug into the results of a recent study by Freeform Dynamics (link: https://freeformdynamics.com/core-infrastructure-services/the-economics-of-application-platforms/), which highlights many of the expectations and experiences that data centre managers and other IT professionals have with modern approaches to IT provision.
In particular, it asked respondents which of these technologies they used, which aspects were important, and of course what challenges and results they had found with them.
If you expect flexibility from your IT investments, you’re in good company – it is an expectation that comes up time and again when we talk to data centre professionals. And in an age when you also consistently report being asked to make existing resources stretch further and “to do more with less”, it’s not surprising. After all, the more flexible an investment is, the more use you potentially can get out of it.
Of course, a key reason why IT needs to become more flexible is that it is required to support more flexible business practices. Modern enterprises need to be adaptive, able to respond faster to changes in the trading environment, whether those changes are social, economic, technological, regulatory or some combination of all those. That in turn pushes IT not just to be more flexible but to flex faster as well.
The widespread adoption of virtualisation (Figure 1), via tools such as VMware, Microsoft Hyper-V, Citrix Xen, KVM and others, was initially driven more by the desire to optimise resource usage and cut costs. However, it soon became clear that it also considerably improved IT flexibility. By abstracting the software from the hardware, it allowed virtualised servers to be replicated, backed-up or moved to a new hardware host, and for a single physical server to host multiple virtual machines.
It appears though that for many organisations, while what we might call ‘traditional virtualisation’ is still broadly used, its role as the primary delivery platform is declining. Much of the new demand for flexible delivery is instead being picked up by more modern alternatives that use virtualisation just as an enabling technology. In particular that means public cloud infrastructure and platform-as-a-service options, and integrated multi-/hybrid cloud environments.
Hybrid cloud combines traditional ideas of on-prem IT – which are familiar and comfortable to data centre professionals, auditors and regulators alike – with access to dynamic and elastic public resources. This may explain why hybrid cloud, for which HCI is often employed as the local/private element, shows more growth than local HCI on its own.
Confirming this point, more than two-thirds of the study’s respondents rated a range of modern approaches as important (Figure 2). Significantly, all these approaches can run locally in a private cloud, remotely, or in a hybrid multi-cloud. Containers and serverless in particular are gateways to a borderless IT future, in which services have the potential to operate seamlessly across public or private infrastructure, whether locally or remotely-hosted.
The problem for many is that the public side suffers from a range of challenges around the cost and management of services and platforms (Figure 3). Most notably, two-thirds of organisations report or expect that employees make uncoordinated, inappropriate or unauthorised decisions on cloud services. Add the difficulty of right-sizing your on-premise systems when many employees prefer – rightly or wrongly – to use cloud resources instead, and the complexity of managing hybrid services is clear.
So what’s needed to make a success of that search for flexibility? What are we looking for if we purchase HCI and adopt a hybrid multi-cloud strategy? Some vendors might have you believe that the ability to move costs from capital expenditure (CapEx) to operational (OpEx) is key. However, while survey respondents said it was useful to be able to mix and match their payment and operational models, this was not high on their radar.
Instead, seamless portability is the primary objective, with the ability to run tasks locally or remotely, and easily migrate them between the two and between different cloud platforms in a multi-cloud (Figure 4).
An important note here is that multi-cloud does not simply mean having multiple cloud platforms in use – many organisations have found themselves in this position, but by accident rather than by design. Multi-cloud refers instead to the deliberate decision to employ multiple clouds – whether public or private, or both – in a coordinated or coherent way, with the ability to seamlessly migrate workloads and/or data between them.
An organisation’s IT infrastructure always tries to reflect business needs, but in reality it often lags behind both current technology and modern service expectations. It may therefore end up reflecting the needs of the past, rather than trying to keep up with those of the future. The challenge for IT going forward is that it needs to be able to deliver multiple services quickly in order to keep up with rapidly changing business demands. That in turn means it requires much more flexible systems.
HCI, which combines local hardware with virtualisation and a cloud-native approach, is clearly an interesting option here. It offers a degree of comfort and familiarity in that it can run today’s workloads, but it also has the flexibility to go beyond that. In particular, HCI will have a key role to play in many organisations as the local or private element in a well managed multi-cloud solution.
Tim Meredith, Chief Commercial Officer, OnApp, offers some advice and insights to companies looking to establish and/or develop cloud services. Simplicity, fluidity and profitability are the key watchwords.
1. Please can you provide a bit of background on OnApp – when and why formed, key personnel and the like?
We provide an end-to-end cloud management solution for Telcos, MSPs and other cloud service providers, and we’re unique in that we have always been focused 100% on that service provider market.
The first version of the OnApp Cloud platform was created for the launch of a new cloud brand at UK2, a large UK service provider that’s now part of The Hut Group. The team back then saw cloud as the future of pretty much all service delivery, and of course there was this bookshop called Amazon that had started making moves into cloud – so the OnApp platform was built to counter that.
Development started way back in 2008. It was so successful in its initial in-house incarnation that we decided to create a white label version that all service providers could use. OnApp v2 was launched in July 2010, and we’ve been helping MSPs, Telcos and hosts build their own successful cloud services ever since.
2. And what have been the key company milestones to date?
There have been many! OnApp in 2010 was pretty basic compared to what we have today. Cloud was just the start: in 2011 we brought our CDN platform to market, and by then we already had over 100 service providers running our software. In 2012, we introduced our own software-defined storage system.
In 2013, we brought bare metal into the platform, and our “smart servers” capability, and we had started connecting all of those OnApp cloud providers into a global network we call the OnApp Federation, and enabling providers to trade capacity with each other. By 2015 we had started bringing VMware clouds into the platform, along with containers, applications, custom scripting, governance and basically everything you need to take any flavour of cloud to market – public, private and hybrid.
Fast-forward to today, and we enable the full range of Infrastructure-as-a-Service capabilities for service providers, in one end-to-end solution with a single UI and API. We’re on version six of the platform now. We’ve deployed more than 5,000 clouds so far, and have hundreds of customers across 93 countries.
3. How would you define the OnApp USP – ie what does the company do that others don’t, or do differently?!
We’re unique because we enable service providers to do cloud, and we’ve come to that from being service providers ourselves. We do many things differently as a result, but if I were to try to boil it down to a manageable number of USPs, I think I’d focus on three things.
The first is simplicity. OnApp was conceived as a way to enable service providers to take cloud to market, way back in 2008. It was possible to do that already, of course, either by developing your own in-house or using something like OpenStack, but it was difficult and time-consuming, and expensive. OnApp was the first turnkey cloud solution and is still the only truly simple way to build a cloud that’s ready to sell.
The second is fluidity, which is another way to say, workload portability – the ability to move your applications in and out of the cloud, or between clouds. OnApp is unique in this regard: you can take any workload and move it between providers and locations in a couple of clicks, and that is a powerful thing to offer customers who worry about lock-in, or the changing political and legislative environment, or simply customers who want the option to choose different price and performance options in the future.
The third is profitability. OnApp is a service provider cloud platform, so we have concentrated on enabling our customers to monetize their cloud in whatever way best fits their customers. We have, I think, the most flexible billing and access control engine on the market. You have total control over how you package your services, how you charge for cloud resources, how much you give for free, what kind of billing cycle your customers want, and how you get that data into your BSS and OSS in the back office.
These things are especially important for MSPs and Telcos who have legacy systems that are great at dealing with proprietary call and event data, but not so much when integration is required with other chargeable services. With OnApp, cloud will play nicely with the back-office systems they’re using already.
4. Please can you talk us through the company’s product portfolio, starting with the OnApp Cloud?
I have already talked about the USPs, so I suppose I should give an overview of our cloud software platform from a product perspective instead. OnApp is a software product that runs on the majority of commodity datacenter hardware.
We're hardware-agnostic, and hypervisor-agnostic: you can build a cloud with the server and storage hardware you use already. OnApp is a turnkey cloud enablement platform, as I've already mentioned. It would be easier to talk about what it doesn't do than what it does. We don't have a client invoicing system, because you're a service provider – so you already have one. As far as the cloud is concerned – or CDN – we take care of everything else. Deployment, management, provisioning, backup, access control, metering, billing, governance, monitoring and alerts – everything. Whether you need private, public or hybrid cloud, everything is done through a single UI and one API.
5. And then there’s OnApp On Demand?
Yes, so the OnApp Cloud platform is perfect for infrastructure companies, those with their own datacenters or colo. However, that excludes a lot of companies who want to launch their own cloud service, but don’t want to have to manage the infrastructure.
On Demand gives you a “ready-to-sell” cloud in 72 hours, including cloud infrastructure and cloud management software, at a choice of 70 locations and providers including AWS. It’s all fully managed and supported by us, so you don’t need to be experts in cloud infrastructure or the cloud platform – we are.
Who is it for? Telcos in particular: they're all looking for a way to offer cloud solutions or edge compute solutions to support their investment in 5G, IoT and other growth areas. On Demand enables Telco product managers to get a new cloud service to market, almost certainly within pre-approved budgets, and steal a march on the competition.
We’re also getting a lot of interest from consulting and professional services providers working at the application level, on service design – they aren’t cloud infrastructure experts, and they don’t want to be – but with On Demand they can offer a complete cloud infrastructure stack without needing to upskill or invest in their own hardware.
6. And you also enable companies to sell CDN?
After we launched the OnApp cloud management platform, CDN was the next most obvious problem area to focus on. Everybody needs better web performance, but most service providers deal with that by reselling somebody else’s CDN – which, in the best case scenario, means they are working with small reseller margins; and in the worst case, working with tiny margins while also handing their customers over to a third party.
Why? Because CDN requires an enormous investment in global infrastructure. To make sure web content is delivered from a location close to each user, you need edge servers everywhere.
We realised that we could already solve that problem, because we had a network of cloud providers and cloud infrastructure across the globe. Our CDN platform takes advantage of that. If you have your own network of datacenters, then great - you can use OnApp CDN to build your own CDN. If you don’t? Then you can use our network of clouds to get the geographic coverage you need.
You can build a standalone CDN with OnApp, or add OnApp CDN to our core cloud platform, or to VMware – so there is plenty of choice. You can even choose to deploy a private CDN, away from the public internet, to speed up the distribution of content in private WANs.
7. And the OnApp Accelerator?
Every service provider wants to offer some kind of CDN service, because they have so many customers running websites on their platform.
However, CDN is quite a specialist technology: a lot of providers who are expert in cloud or hosting don’t have the skillset to build and take their own CDN to market. Most of them just resell one of the big CDN providers instead, or something like CloudFlare, and sacrifice control and margins.
So, while we offer a full CDN solution for companies who understand how to design CDNs for streaming, video on demand, and those more complex content delivery requirements… we decided to develop a CDN solution for everybody else, too. That’s the OnApp Accelerator.
If you’re running an OnApp cloud, the Accelerator will take the content of your customer VMs – their websites running in your cloud – and automatically optimize it, compress it, and distribute it to our global CDN. It’s free, and it requires nothing more than a click of an “Accelerate” button in your VM control panel. For most web applications running in an OnApp cloud, it delivers a 100% performance improvement.
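As a generic illustration of the kind of optimisation such an accelerator layer relies on (and not a description of OnApp's actual implementation), gzip compression alone can shrink repetitive HTML dramatically before it is cached and distributed.

```python
# Generic illustration (not OnApp's implementation): gzip-compressing a text
# asset before caching or distributing it, which is where much of a one-click
# web accelerator's gain typically comes from.
import gzip

html = ("<html><body>" + "<p>product listing row</p>" * 500 + "</body></html>").encode()
compressed = gzip.compress(html, compresslevel=6)

ratio = len(compressed) / len(html)
print(f"{len(html)} bytes -> {len(compressed)} bytes ({ratio:.1%} of original)")
# Repetitive markup like this often shrinks to a few percent of its original
# size, cutting transfer time before any geographic distribution kicks in.
```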
8. You mentioned the OnApp Federation – what is it?
The OnApp Federation is a technology built into the OnApp cloud platform. We developed it because we wanted to help service providers do three things. Firstly, to get the scale they need to compete with hyperscale clouds. So, for our customers, the Federation is a marketplace built into their cloud: using the marketplace you can subscribe to other OnApp clouds around the world, and make those remote clouds available to your customers alongside any of your own clouds. That gives you instant global reach for your cloud service, and instant scale you can use to respond to your customers’ RFPs.
Secondly, the Federation is a channel to market. If you have spare capacity on your local clouds, you can publish your locations to the marketplace, and get paid when other providers use them.
Thirdly, it’s a workload fluidity technology. The Federation enables workloads to move seamlessly between different clouds, so when your customer needs its application to be local to Australia – for example – you can deploy it there. When they ask if there’s a way to reduce cost, you can find a lower cost location and move their workload there.
9. And OnApp for VMware?
If you look at our cloud platform, we’re enabling companies to take cloud services to market based on open source hypervisors, and at first glance that might look like we’re competing with VMware – but in reality, we’re not. The reason there are so many VMware-focused cloud providers is that there are so many enterprises out there running their IT on VMware: it’s a different market.
Of course, that “do we compete/do we not compete” question came up in the conversations we were having with potential service provider customers, and we realised that rather than just being an alternative solution for those companies, in many cases OnApp was a really good fit with what they were doing already. VMware is an incredible product, but what we heard was that the OnApp approach to the UI, to that “single pane of glass” experience, and our approach to things like RBAC, billing and automation, was exactly what these VMware cloud providers needed.
We launched OnApp for vCloud Director in 2014, and in 2018 we introduced OnApp for vCenter too. These solutions help VMware cloud providers make their clouds easier to use, easier to manage and easier to monetize – it’s like having the best of both worlds.
We're even seeing these providers launch adjacent KVM cloud products alongside VMware, because then they can offer an alternative for customers who want a different price point, or perhaps a more cost-effective developer environment before migrating to production. We also see some partners specialise in different platforms for private, public and hybrid solutions, all of which are possible with the combinations OnApp allows. We've been working with VMware for about five years now, and they're an amazing partner to have.
10. And finishing with services and support?
Cloud is about resilience and “always on” as much as it is about scalability, self-service, and so on. For MSPs, support from your cloud platform provider is not really optional. Since we launched, we have offered 24x7 global support with a 15 minute SLA to our customers: it’s too important to ignore, so we include it as standard.
In terms of professional services, we help service providers deploy, configure and optimize their clouds – and certify their teams, if they want. Most deployments are remote – we can do it for you, and get you cloud-ready in a couple of days – but we also offer on-site support and service if you need it. And as I have mentioned, if you don't want to worry about the infrastructure side of things, we can take care of that for you as well.
11. Focusing on some industry issues, can we start with multi-Cloud management - an increasingly important topic?
Every company needs a mix of IT, a hybrid of public and private clouds, hosted cloud and on-premises cloud, bare metal and containers, CDN and more. Most companies also need a mix of these different kinds of infrastructure in different locations, because they want to optimize IT for cost, or performance, or for compliance and data sovereignty reasons.
As an MSP, or a host or Telco trying to design services for those very different needs, it can be difficult. There are plenty of multi-cloud management tools out there, but a lot of the providers we talk to have begun to experience their limitations, because they tend to work at a superficial level, an API level of integration.
As a result, they will usually give some level of visibility of multiple silos of IT – what you have running where, and how much it costs – but they don’t make it easy for companies to do anything useful with that information, because it’s still difficult to migrate workloads between different types of IT infrastructure.
We offer a bottom-up approach, a block-level, data-centric approach to multi-cloud. You can have workloads running on your own infrastructure, in a third party’s datacenter, on hyperscale infrastructure from AWS, even on your customer premises: you can offer whatever mix of infrastructure and public or private cloud your customers need, and bare metal, and containers - and because it’s running on OnApp, you can migrate those workloads at will, to suit each customer’s need for location, or security, or latency, or price, or data sovereignty.
12. And where are we right now with the KVM/Open Source situation?
I suppose it depends on what you mean. From an open source hypervisor perspective, we support both Xen and KVM, though our emphasis today is more on KVM. Historically Xen has been better for Windows workloads, but KVM has caught up and in many areas has surpassed Xen in the last couple of years: we recommend KVM for all customers now.
From an open source cloud platform perspective, we used to come up against OpenStack in sales situations, but today that is rarely the case. OpenStack has many merits, but I wouldn’t count simplicity or time-to-market among them. We’ve been successful by taking open source technologies where it makes sense to – I’ve already talked about Xen and KVM, and for example we use Open Daylight for software-defined networking – but our focus has been on providing a ready-to-sell cloud solution, rather than a framework of components that needs an army of engineers to coax into some kind of go-to-market proposition.
13. And we can’t ignore the buzz topic of the moment – edge?
Edge is one of those buzzwords that sometimes makes me laugh, because it’s not really a thing. It’s just a marketing term, like grid or fog. Edge is just cloud distributed to the locations you need for IoT, 5G apps, enterprise mobility and all those kinds of solution areas.
In the Telco world, I suspect edge has gained its own identity because Telcos have traditionally found it difficult and expensive to build cloud infrastructure: that’s partly because they are – let’s be honest – network businesses, not cloud businesses; and it’s partly because they’ve been trying to do cloud using open source frameworks, or partnering with hyperscale providers – both of which mean investing massively in development and engineering resources.
Edge computing is no different to cloud computing, and if it’s easy to spin up a cloud, then it’s easy to build new edge propositions. If you’re a Telco or MSP, that’s what we help you do. How you create the edge is not the problem you need to focus on – edge or cloud is a turnkey solution you can get up and running in a couple of days. Your focus should be on what you do with it, how you combine it with your new network and application propositions.
14. Which leads us on to intelligent automation?
In our unpredictable world, the ability to automate activities rapidly and intelligently is very important. Tasks that would have historically required significant human interaction now have the potential for automation, but we must be careful. The unpredictability of human behaviour calls for some complex science, so we need to try and understand what can, will and shouldn’t be automated. Critical healthcare and Netflix streaming both need some automation, but with different levels of human approval and intervention.
For example, we provide horizontal and vertical auto-scaling within OnApp. This is possible because we have reliable metrics reported and clearly definable performance boundaries that govern when thresholds are being breached and what action to perform or reverse in response. This is rarely a life-threatening decision, but it is important for businesses whose customers rely on the speed and reliability of their cloud-based services in a digital age.
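As a hedged sketch of what such threshold-driven automation looks like in principle (illustrative only; the metric names and bounds are assumptions, not OnApp's engine), the decision logic reduces to comparing a reported metric against clear boundaries and applying a reversible action.

```python
# Hypothetical sketch of threshold-based horizontal auto-scaling (illustrative
# only): reliable metrics plus clear boundaries make the decision safe to
# automate, and every action has an obvious reverse action.

def scale_decision(cpu_percent: float, current_vms: int,
                   high: float = 80.0, low: float = 25.0,
                   min_vms: int = 1, max_vms: int = 10) -> int:
    """Return the new VM count for a simple horizontal scaling policy."""
    if cpu_percent > high and current_vms < max_vms:
        return current_vms + 1          # scale out
    if cpu_percent < low and current_vms > min_vms:
        return current_vms - 1          # scale back in (the reverse action)
    return current_vms                  # within bounds: do nothing

# Sample readings, each evaluated against a pool of three VMs
for reading in [55.0, 85.0, 90.0, 40.0, 15.0]:
    vms = scale_decision(reading, current_vms=3)
    print(f"cpu={reading:>5.1f}%  ->  run {vms} VMs")
```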
15. OnApp looks to provide MSPs and telcos with the ‘raw ingredients’ for establishing a Cloud Services Business – any advice for any of these companies who are starting out on this journey?
I’d say we offer a ready-to-cook meal rather than raw ingredients. There are plenty of different options for doing cloud, and obviously we think the OnApp approach is the best, but whatever approach you take I think the important advice for companies new to this space is probably to focus on three things.
Firstly, you have to abandon any notion you have of cloud being complex and expensive to do yourself. It's a bit like the edge computing scenario I was just talking about. It really isn't difficult to create your own cloud, whether you own infrastructure or not. It really isn't. If your cloud project timescale is measured in months, it's the wrong project: you can and should be selling your new service in weeks, if not days.
The question is how? Well, if your core expertise is in a vertical solution space, or networks, or perhaps a specific application space, that’s what you build your cloud service around. You’re not really launching some new standalone cloud business, you’re bringing cloud to what you do already, and adding value to your customers by combining those solutions with the simplicity, flexibility and scale of the cloud. So the infrastructure, the cloud deployment and management question, that isn’t what you should focus on.
Secondly, you need to think very hard before just going in wholesale with hyperscalers like AWS. Lock-in is a real danger, because customers will need to change their approach to cloud, and you need to be able to help them move to different infrastructure types and providers, different locations, different price and performance points.
And thirdly, of course, you’re going into business as a cloud service provider. So what’s your growth plan? A decade ago, “build it and they will come” may have had some merit. Today, you probably won’t succeed if you’re just launching another generic cloud service, another cloud storage solution. You already have customers: build your cloud around what you know they need today, but with the added value of removing the complexity of cloud from their businesses – and the benefits of having you, as their service provider, making sure their cloud is tailored for the kind of business they are.
16. Longer term, how do you see companies like yourselves co-existing alongside the hyperscalers who seem to be wanting to own everything?
I honestly think the backlash against hyperscale has already begun. It certainly has for the MSPs and Telcos we talk to. We have nothing against hyperscale infrastructure: if you go to Amazon you’re going to get high quality infrastructure and a big box of different services to experiment with. If you’re a developer, it’s like all your dreams have come true.
However, businesses aren’t run by developers. As an ex-developer it pains me to say that, but it’s true. There is this weird developer-centric view of the world that a lot of the analysts have bought into, or been bought into, and while it’s true that building your app, your start-up, on a hyperscale cloud is amazing while you’re building it… it usually becomes cripplingly expensive in production. Micro-services are great from a developer’s perspective, and they do make sense from a technical perspective, but they are also a way for these large cloud providers to obfuscate the total cost of the service you’re building. The first hit is cheap or free: long-term, you’re looking at a massive commitment.
While the production costs are crippling, the lock-in is even more so – so much so that successful companies like Dropbox were willing to drop 75 million dollars just to escape that ecosystem. Apple, Lyft, Pinterest are spending insane amounts of money for cloud infrastructure and won’t brook any other approach. We hear this sometimes from our customers, too – cloud providers who offer equal performance, the same scale, better support and better pricing, but whose clients have investors who insist on the Amazon juice because they have bought into the hype.
Hyperscale can be incredibly useful, but don’t believe the analysts: all companies should think, very carefully, about what they want not just now but in 5 years – and think about the relative merits of different approaches to cloud. Cloud is everywhere: there are thousands of cloud providers who offer better pricing, better support and more choice of location than any of the hyperscalers. There’s probably one in your city – and yes, there is a pretty good chance it runs on OnApp.
The key thing is to make sure your cloud infrastructure partner gives you the flexibility to adapt, and even move your workload, your application, when you need to.
17. In finishing, what one or two pieces of advice would you give to any organisation looking to set up a Cloud Services Business?
It’s really simple – be problem-centric. Understand the challenges, pains and needs of your customers or even your customers’ end-customers. If you want there to be value in your proposition, it must solve problems.
So often in our industry we find organisations and individuals quoting Henry Ford's old “they would have asked for a faster horse” line, then using it as an excuse to create a solution looking for a problem. The motorcar was not an individual spark of inspiration; rather, it was the culmination of many different needs, such as economy, safety, accessibility, housing availability and speed, being addressed by mass production, combustion engines, new materials, roads and population sprawl. There was an incipient need with no solution yet aligned to it, and therefore a gap in the market that the horse or the train couldn't fill.
Look for where needs aren’t being met today or in the future, find the gap in the market and understand how much of the problem you want to solve. You may need complementary solutions, like the car and the road, which require successful partnerships.
18. And, turning to end users, what do they need to look out for when choosing their Cloud supplier(s)?
Every user should think about price, performance, uptime, all of the usual things. But think about lock-in and support first and foremost. It looks good now, with your free instance: what will it look like when your app, your website or your business takes off?
Schneider Electric is developing an integrated, comprehensive edge ecosystem designed to provide key partners, the Channel and end users with the tools they need to take advantage of the opportunities offered by this key digital transformation building block. DW reports in the second of a two-part article (the first article appeared in DW June).
Design and deployment at the edge
Picking up on the need to design and build a whole new breed of edge data centre, Schneider Electric's Director of Systems Engineering and Technology, Johnathan Healey, outlined how the company's Science Centre exists both to help customers make informed decisions, based on the thought leadership and research content it develops, and to act as something of a catalyst within the organisation for exploring new technologies and systems architectures. Hence the Centre produces a whole range of collateral, including white papers, reference designs, analysis, solution prototypes, trade-off tools and configurators. Much, if not all, of this content is publicly available, emphasising Schneider Electric's desire not to ‘own everything’ but to help grow the overall market, meaning that there are many more opportunities for Schneider, its partners and even its competitors (the company is aware that competing vendors use some of the configurator tools, for example!).
In terms of the edge, the Science Centre has been working on simplifying edge deployments with rules-based design tools, and, as a result, has developed a Local Edge Configurator (LEC).
The emphasis is on the edge ecosystem
Lest anyone doubt Schneider Electric's commitment to the edge (!), Jamie Bourassa's job title – VP Edge Computing – only serves to emphasise just how seriously this opportunity is being taken by the company. And Jamie's opening slide – contrasting an ‘old-fashioned’ fixed line telephone with the camera, headset, laptop, wifi and ISP all required to deliver a Skype call – was as elegant a statement of the point as could be made, which he summarised as: ‘Digitisation requires a new ecosystem that can handle more complex processes and provide richer experiences’.
Jamie went on to give real-world examples, such as Auchan's unmanned, automatic shopping kiosk (Bingo Box) in China and Ocado's business model, to show how end-to-end digital processes and experiences are driving business across all industry sectors. As Jamie explained: “An integrated ecosystem is critical for the success of edge and wider digital transformation. Consultants, vendors, systems integrators, service providers, IT and OT; standards, integrations, design tools and management systems all have a role to play.”
Schneider Electric sees itself as a key enabler of, as well as a major player in, such ecosystems, thanks in no small part to its 150,000 or so global partners, combined with the continuous development of advanced digital tools, like EcoStruxure.
Jamie gave brief details of a couple of customer success stories. In the first, the combination of Cisco UCS certification, the local edge configurator tool and the integration centre produced a certified and tested converged IT solution. For the partner involved in the project, there was a 35 percent reduction in engineering costs (thanks to the use of standardised tools) and a 7 percent reduction in maintenance costs. For the customer, the main benefit is a 50 percent increase in deployment speed without risk to any of the IT equipment warranties.
In the second example, direct MSP platform integrations, EcoStruxure IT and Managed Service Practice resulted in the cost-effective management of IT assets in unmanned locations. The partner’s return was a 2x increase in lifecycle gross margins. The main customer benefit is an increase in availability and a reduction in the costs associated with IT staffing.
The ecosystem in action
Picking up on the Channel aspect of the Schneider Electric ecosystem, Nick Ewing, Managing Director of EfficiencyIT, outlined how his relatively young company is being helped to take advantage of the edge/digital transformation opportunity. Nick says: “We have watched the market change dramatically and it continues to change and adapt to new ecostructures – cloud, hybrid, edge, basically ‘anything as a service’. There’s been a marked impact on how technology is delivered, with the demand for large on-premise data centres and server rooms reducing, while the demand for smaller, more resilient edge IT environments is increasing. The hybrid model is taking effect, so both on-premise comms and server rooms and edge data centres are going to be around for quite a while.”
Nick continues: “Our mission is to help customers to navigate this developing landscape, supporting them in their decision-making when deciding what to keep on-premise, what to migrate to cloud and what to place in colocation space to optimise the ways IT can support the business objective.”
For Nick, Schneider Electric’s overall data centre portfolio is helping customers to have certainty about the increased reliability, manageability and predictability of their IT services. As a result, EfficiencyIT has achieved above target growth, increased business with existing customers, developed new customers, reduced the cost of customer acquisition and built out its service offering.
Easy to say, perhaps, but much better to demonstrate? Enter EfficiencyIT customer, Simon Binley, data centre manager at the Wellcome Trust Sanger Institute (WTSI).
The WTSI is one of the world’s premier centres of genomic discovery and understanding. The ability to conduct large-scale, high-throughput genomic studies enables researchers to play leading roles in national and international projects spanning cancer, infectious diseases, human epidemiology and developmental disorders.
In simple terms, while the complexity of genome sequencing and mapping remains the same, the speed at which this work can be carried out has improved dramatically since the first human genome was mapped 25 years ago - opening up new possibilities for research work. For example, The Earth BioGenome Project is a quest to sequence the genomes of nothing less than all life on earth. This knowledge of the natural world will form a foundation for future biotechnology.
The importance of IT
Underpinning the work of the WTSI is a high throughput computing infrastructure, with 36,000 cores of compute in a massively parallel server farm. IT outages can cause major disruption to the large scale data processing activities this infrastructure enables.
WTSI’s 4 MW data centre is one of the largest edge data centres out there – the heavy demands of data generation and analytics require compute at the point of data generation. Outputting approximately two terabytes of data per day, the genomic sequencing activities to date have created 55 petabytes of data, which can never be deleted and has to be available 24x7.
Recently, Simon has been overseeing the development of the WTSI data centre footprint, with a disaster recovery site at the Wellcome Trust’s Euston headquarters building, and a link to a colocation data centre in Slough – increased resilience being a primary driver.
Additional challenges faced by Simon include managing the data centre infrastructure with only three staff members, and WTSI’s concern about the carbon footprint of its compute activities (the data centre is the largest energy user within the organisation). Any energy efficiency improvements will result in OPEX reduction, with the money saved being made available for research.
Schneider Electric equipment on site includes:
Customer benefits
In working with both EfficiencyIT and Schneider Electric, WTSI has realised a range of benefits. The management software provides visibility of infrastructure, energy use and service requirements, and has helped validate the data centre development undertaken by ISG. Information is delivered to Simon by mobile phone app, and EfficiencyIT provides him with a quarterly report covering visibility of core assets and performance; visibility of energy use, especially where use is too high; visibility of maintenance requirements; and predictive assessment of the end-of-life (EOL) requirements of equipment and components such as batteries.
This means that WTSI is able to meet its IT, energy and data centre team efficiency and reliability challenges. In turn, this means that the Institute’s primary role – genomic research – can be carried out in an optimised, reliable environment.
The very final piece of the edge jigsaw came in the form of a visit to some key assets of Care New England (CNE) in Providence, Rhode Island. CNE has over 7,000 employees, including 1,200 medical staff, carries out over 17,500 surgical procedures each year, ‘oversees’ 9,000+ births, 104,000+ emergency room visits and over 193,000 inpatient days. The organisation spends over $34,000 research dollars and provides over $13 million of uncompensated care.
The IT infrastructure which supports these key activities is housed in a range of equipment cabinets and closets across multiple sites. Infrastructure consolidation has been considered but is seen as possibly lessening the resiliency of the infrastructure, so the IT and data centre teams have to manage many locations across several sites.
Living in Western Europe, where power reliability is pretty much a given and where extreme weather conditions are unusual (although increasing), it’s perhaps difficult to fully appreciate the importance of the business continuity imperative. Rhode Island experiences severe storm conditions, and not a little snow in the winter, all of which means power reliability is far from guaranteed.
So, whether it’s doctors scanning expectant mothers or working on research projects, Schneider Electric UPS systems provide a vital power backup solution. One doctor described in vivid detail how, during and in the aftermath of an extreme weather power outage, he was able to carry on meeting and consulting with patients, thanks to the UPS system providing the power to his scanning machine.
Monitoring and maintaining multiple UPS systems across multiple edge sites is no easy task, but it was obvious that the CNE data centre team owed their relative peace of mind to Schneider Electric’s technology – both the hardware and software.
The edge is central to the future
In any industry sector, there are the leaders and early adopters, with the bulk of companies happy to follow on behind. When it comes to the data centre space, the potential for edge computing and the wider digital transformation opportunity, there’s little doubt that Schneider Electric is taking a lead. The company would not claim to have all the answers, but is committed to helping its customers to find them, especially when it comes to developing and providing what might be termed ‘best-of-breed’ ecosystem solutions. As the edge becomes more and more central to more and more businesses over the next few years, expect Schneider Electric’s role in this digital transformation building block to increase dramatically.
At the recent Hannover Trade Fair, Schneider Electric introduced seven new EcoStruxure “Advisor” apps and services that give users value around safety, reliability, efficiency, sustainability, and connectivity as they embark on their digitisation journey. Those relevant to the data centre space include:
EcoStruxure Power Advisor - gives power managers deep insights into data quality and network health. EcoStruxure Power Advisor combines expert advice with advanced algorithms to identify gaps or issues in power management systems, as well as power quality issues within a larger electrical distribution system. Power Advisor allows users to track and analyse equipment conditions, manage electrical capacity to ensure flexibility, and get advanced warnings remotely. Its network analytics give managers real-time information on data quality through insights and recommendations that establish a trustworthy data foundation, while electrical network health system summaries and trending analytics improve awareness of electrical network issues.
EcoStruxure IT Advisor - gives data centre managers a complete overview of assets and performance. EcoStruxure IT Advisor is highly secure planning and modelling software that provides data centre professionals with an instant overview of their data centre operations, helping them optimise their capacities, plan changes and analyse business impact, automate workflow, and deploy energy-based billing to reduce OpEx and increase ROI. IT Advisor gives users an overview of the physical location of assets thanks to a floor view with areas, cages and racks, and IT assets. It improves capacity optimisation by ensuring that physical infrastructure provides the needed redundancy, backup time and availability, and allows for better change management thanks to real-time analysis of data centre infrastructure that highlights redundancy vulnerabilities through impact simulation. Because it is cloud-based, IT Advisor provides access to data from anywhere at any time, delivering up-to-date cybersecurity, as well as automatic software updates and backup.
Schneider Electric has also launched Schneider Electric Exchange, said to be the world’s first cross-industry open ecosystem dedicated to solving real-world sustainability and efficiency challenges. Schneider Electric Exchange is empowering a diverse community of solvers to create and scale business solutions and seize new market value. With Schneider Electric Exchange, individuals gain entry to a vast network of technical tools and resources to develop, share, and sell digital and IoT innovations.
Moving data from one system to another can be challenging. Ensuring the data arrives intact and usable is paramount.
Eric Lakin, Storage Engineering & Data Center Engineering Manager at the University of Michigan can envision a time when the university has five to 10 petabytes of data stored in a cloud service. “If we were to decide down the road for some business reason that we wanted to stop working with that cloud service and begin working with a different cloud service,” he says, “there really isn't a mechanism in place for moving that much data from one cloud provider to another without breaking the connections that that data may have with on-premises equipment.”
More specifically, Lakin’s team moves data that hasn’t been utilized in 180 days out to Amazon S3 and S3 Gov. The on-premises storage system tracks the location of the data. “It uses what we used to call an HSM model, a hierarchical storage management model, to move that data out, so a stub file remains behind,” says Lakin.
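To make the stub-file idea concrete, the fragment below is a minimal sketch of an HSM-style tiering sweep, assuming the AWS boto3 SDK, a hypothetical bucket name and a hypothetical on-premises path; it illustrates the general pattern Lakin describes, not the University of Michigan's actual tooling.

import os
import time
import boto3  # AWS SDK for Python (assumed installed and configured)

AGE_LIMIT = 180 * 24 * 3600               # 180 days, in seconds
BUCKET = "example-archive-bucket"         # hypothetical bucket name
s3 = boto3.client("s3")

def is_stub(path: str) -> bool:
    """A stub is the small pointer left behind once the real data has moved."""
    with open(path, "rb") as f:
        return f.read(5) == b"s3://"

def tier_out(path: str) -> None:
    """Copy a cold file to S3, then replace it locally with a stub pointer."""
    key = path.lstrip("/")
    s3.upload_file(path, BUCKET, key)      # the data now lives in object storage
    with open(path, "w") as stub:          # the stub is what keeps local references alive
        stub.write(f"s3://{BUCKET}/{key}\n")

def sweep(root: str) -> None:
    """Tier out anything not accessed for 180 days, leaving stubs behind."""
    now = time.time()
    for dirpath, _, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            if not is_stub(full) and now - os.path.getatime(full) > AGE_LIMIT:
                tier_out(full)

if __name__ == "__main__":
    sweep("/data/research")                # hypothetical on-premises mount point

The sketch also shows why switching providers is painful: every stub encodes one provider's namespace, so moving the objects elsewhere means rewriting all of those pointers, which is exactly the broken-link problem Lakin raises.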
Even if they were able to magically move a hypothetical five petabytes of data over to Azure or GCP, their storage system wouldn’t know what was happening. “When we were investigating solutions, we were not aware of tools in place for moving large quantities of data between cloud providers while maintaining whatever symbolic links we have back to our own organizations,” he says.
Even if that were somehow taken care of, using cloud storage natively with some kind of gateway, the same problem would exist, says Lakin. “Once you've made a significant commitment to putting a large amount of data in any one cloud provider, at some point you're limited in your ability to just seamlessly transition it over to a different provider,” he says.
Bringing all of the data back to on-premises storage, or even just large quantities of it, would also require a large capital investment: if your entire storage strategy were to change, you would have to procure a pretty large amount of on-premises storage. “What it's really about is having the flexibility and the option to change your mind down the road with regard to who you're doing business with,” says Lakin.
Mike Jochimsen, Director of Alliances at Kaminario, agrees. “Data liberation is the ability to easily and seamlessly move data between different cloud environments without restrictions due to data formats or on-ramp and off-ramp limitations to and from clouds.”
Jochimsen believes data liberation is both a theoretical and an explicit practice that should be defined and followed. The benefit for the end user is a world in which they have access to their data no matter where it is, and storage consumers can manage the actual location of the data to meet their own budgetary and business needs. “So the end customer benefits because it's transparent to them, they don't care where the data resides at any given point in time, all they care about is the data is accessible to them,” he says.
The hybrid, multi-cloud platform is in use by most companies today, explains IBM’s Program Director and Global Offering Manager for Hybrid Cloud Storage, Michelle Tidwell. “Most companies are using an average of five private and public clouds today in their enterprises and that's just going to increase,” she says.
Clients are looking to have the kind of flexibility that data liberation could provide to help transform their businesses to take advantage of the multi-cloud environments they’re already using. “It's about moving the data and having mobility around the data,” she says. “It's also about being able to manage that data seamlessly between on-premises and cloud environments such that resources can be monitored and adjusted depending on cost and performance. That’s something we’re really focused on. Clients also have made significant investments to operationalize their environments on-premises, and being able to reuse that with public cloud infrastructures could be a big savings in OpEx.”
Orchestration, too, plays a critical role in allowing cloud-native applications running on platforms like Kubernetes to access this data no matter where it’s hosted. “You define the composition of an environment through orchestration layers, the automation utilities then build the environment,” adds Jochimsen. “You need to have that extensibility out to these cloud environments to seamlessly migrate apps and data across cloud layers.”
Data liberation in today’s market is mostly being talked about in terms of how to extend the current data center storage setup to take advantage of public cloud infrastructure, says Tidwell. “In that context, the cloud is used more as remote storage and archive, sometimes targeting things like S3,” she says. “However, with the latest software defined storage and hybrid cloud data management capabilities, it can also be used to set up a disaster recovery capability on the public cloud or just have a more seamless workflow migration back and forth.”
Depending on the vendor, the type of solution needed, and the end use case, the techniques involved can be anything from on-premises storage-level to network-level data replication.
“I think the way it's being done today between these various proprietary clouds is through API layers,” says Jochimsen. “And so each of the proprietary cloud vendors has its own API for accessing via applications that may span different layers,” he says. “So while you can build it today through APIs, there's no standardization at that API layer.”
There’s definitely room for some standardization here, as each vendor currently must build their own integrations via programming interfaces.
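In the absence of a standard, the usual workaround is a thin, provider-neutral interface with one adapter per cloud. The sketch below is purely illustrative, using hypothetical class names and an in-memory stand-in; real adapters would each wrap a vendor SDK such as boto3 or google-cloud-storage.

from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Provider-neutral view of object storage; today each cloud needs its own adapter."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in for demonstration; a real adapter would wrap one vendor's proprietary API."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]

def migrate(source: ObjectStore, target: ObjectStore, keys: list[str]) -> None:
    """Copy objects from one cloud to another through the neutral interface only."""
    for key in keys:
        target.put(key, source.get(key))

# Usage: the migration logic never touches a vendor SDK directly.
old_cloud, new_cloud = InMemoryStore(), InMemoryStore()
old_cloud.put("dataset/part-0001", b"...")
migrate(old_cloud, new_cloud, ["dataset/part-0001"])

The point of a standard is that this adapter layer stops being something every vendor and every customer has to rebuild for themselves.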
There are other concerns that need to be addressed, as well. “If you consider Europe and GDPR, there are concerns there,” says Tidwell. “(Users) would probably like the idea of flexibility but yet sometimes they're restricted about moving things across different countries.”
Another technical challenge to data liberation is networking. “That can be a very challenging problem, so we need technologies and standards around software-defined networking to help ease the physicality of setting up network connections,” says Tidwell.
SNIA, a non-profit global organization dedicated to developing standards and education programs to advance storage and information technology, has already begun the conversations needed to explore and surmount these challenges, with the development of CDMI, the Cloud Data Management Interface standard. CDMI is an ISO/IEC standard that allows disparate clouds to talk a common language and eases the data movement and migration between different clouds.
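Because CDMI is defined over HTTP, a client interaction is easy to picture. The fragment below is a rough sketch using Python's requests library; the endpoint and object path are hypothetical, and the header and content-type values should be checked against the version of the CDMI specification your provider implements.

import requests  # third-party HTTP client

CDMI_VERSION = "1.1.1"                      # confirm against the spec version your cloud supports
BASE = "https://cloud.example.com/cdmi"     # hypothetical CDMI endpoint

def read_object(path: str) -> dict:
    """Fetch a data object's metadata and value via the CDMI HTTP interface."""
    resp = requests.get(
        f"{BASE}/{path}",
        headers={
            "Accept": "application/cdmi-object",
            "X-CDMI-Specification-Version": CDMI_VERSION,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()                      # CDMI returns a JSON body describing the object

if __name__ == "__main__":
    obj = read_object("research/archive/run_42.dat")   # hypothetical object path
    print(obj.get("metadata"))

Because the same request shape works against any conformant provider, a client written this way is not tied to one vendor's proprietary API.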
Jochimsen believes that SNIA will be instrumental in helping all the players that need to be involved get together and figure out the standards that need to be in place to make data liberation an actual process, and one of them will be CDMI. “Without standards it falls back to the end consumer to build their own or multiple vendors to build very expensive solutions,” he says. “SNIA has brought us all together and provided the forum for standardizing all of those touch points that allow us to do it more quickly, more cost efficiently and save the end consumer a lot of money hopefully.”
University of Michigan’s Lakin feels similarly. “I think SNIA's role has helped us identify data liberation as something that would definitely benefit from a solution, whether it's a third party provider or some other organization that can solve this problem,” he says. “And with CDMI, the technology issues can be more easily addressed. We're trying to adopt cloud services at scale and uncover problems that maybe one organization has either experienced or has identified as a future risk, and then everybody else becomes aware of it as well.”
The conversation then helps with decision-making and strategic planning, but that’s not all SNIA can do in fostering this sort of collaboration. “At some point,” Lakin continues, “we can also use our cooperative efforts to put pressure back on not only our storage vendors, but also cloud vendors to say, ‘Look, this is a business need and we need to have this kind of flexibility.’ So hopefully there's strength in numbers, and we can insist on adoption of CDMI to help us with our data liberation programs.”
About the SNIA Cloud Storage Technologies Initiative
The SNIA Cloud Storage Technologies Initiative (CSTI) is committed to the adoption, growth and standardization of storage in cloud infrastructures, including its data services, orchestration and management, and the promotion of portability of data in multi-cloud environments. To learn more about the CSTI’s activities and how you can join, visit snia.org/cloud.
Druva is a leader in Cloud Data Protection and Management, delivering a data management-as-a-service solution that aggregates data from endpoints, servers and cloud applications and leverages the public cloud to offer a single pane of glass to enable data protection, governance and intelligence. This increases the availability and visibility of business critical information, while reducing the risk, cost and complexity of managing and protecting it.
DW talks to Druva CEO, Jaspreet Singh.
Druva is offered as-a-Service, built on AWS, and protects and manages enterprise data across endpoint, data centre and cloud workloads.
We started out ten years ago building disaster recovery software for the financial sector but by 2013 had pivoted to building an intuitive, cloud-native data management and protection solution on Amazon Web Services (AWS). By offering backup in the cloud, we can help businesses turn their data into an asset, making it more open and accessible to improve governance, resiliency, and insights. Druva is trusted by over 4,000 global organisations, and we manage over 100 PB of deduplicated data in AWS, representing an even greater volume of actual customer data.
2. And what have been some of the recent key company milestones?
It has been an incredibly exciting time for us. We have seen significant growth in the last 12 months, with our headcount expanding from under 500 people in early 2018 to more than 700 at the same time this year. We opened a new innovation centre in Pune, India and just opened a new office in Singapore as well as a new headquarters in California.
On the technology side, we have really focused on making it easier for users to back up and protect data, and consolidate it into a single location where it can easily be accessed or searched. Last summer, we acquired CloudRanger, which focuses on backing up AWS workloads, and integrated it into Druva Cloud Platform within two months. At the end of the year we unveiled a first-of-its-kind partnership with AWS to offer Snowball Edge devices to Druva customers free of charge, and more recently we announced a next generation disaster recovery-as-a-service.
3. And how does Druva distinguish itself within the storage networking industry?
Comprehensive data protection and management as a service goes significantly beyond basic storage or traditional backup. Those systems have been in place for the last 30 years, but the future is in the cloud, and traditional systems are no longer sufficient when data becomes dispersed between so many environments. It’s now about integrating data from endpoints, servers and cloud applications and putting in place the critical data protection, governance and intelligent analytics necessary, all while being incredibly user friendly and driving down costs.
The other major difference is velocity - speed of deployment and innovation. Druva can be deployed in as little as a week and we roll out updates as regularly as every month - things that are automatically incorporated into the user experience. There is no patching, upgrading, or downtime required.
4. And, more specifically, the Cloud storage backup and recovery market?
We are the only cloud-native data protection and management offering, and the only one available as a service. This means that Druva Cloud Platform runs in our AWS account, not the customer’s, allowing us to automatically scale resources up and down throughout the day to meet customer needs, giving them all the power and storage needed while reducing costs when operations aren’t running. This dynamic scaling and the use of S3 to store backup data allows us to offer a TCO less than half of our competitors’. Other products claiming to be “in the cloud” require customers to pay for virtual hardware in their own cloud account in order to use the vendor’s service. This virtual hardware runs 24x7 and uses block storage for backup data, and this infrastructure must be maintained by the customer, increasing costs and decreasing service levels.
5. Druva recently launched its next generation Disaster Recovery-as-a-Service (DRaaS) – can you tell us what’s new?
Absolutely. Disaster recovery has traditionally been something only available to the largest enterprises due to its cost prohibitive nature and the complicated nature of DR testing. Now, enterprises can improve their business continuity with things like recovery automation, failback recovery, tighter AWS integration, RTOs of 15-20 minutes, and simplified orchestration and testing - all while reducing costs by up to 60 percent. We’ve also incorporated seamless one-click failover to the cloud for on-premises workloads, and for the first time, recovery for cloud workloads with cross-region/account support.
6. Backup and data recovery in and using the Cloud – what are the issues that need to be considered?
Firstly, there’s a core golden rule: don’t store your backup on your primary system. Too often, companies may think their data is secure because it’s stored in the cloud, but any event on that one system, or even a single data centre within that system, can threaten the data integrity of their entire company. Data governance protocols are also critical. Storing your backups with your production system does not prevent malicious actors, or simple human error, from deleting or corrupting core information. True backup and recovery must place robust yet easy-to-use protocols around any and all changes to company data.
Additionally, in designing your backup system in the cloud, you need to understand your business demands, SLAs, and requirements for RPO and RTO. The cloud is an exceptional environment for backup and recovery, but similar to any other solution, you have to make sure it meets your business needs.
Two closely related things should be considered when using cloud data protection. The first is how the initial backup will get to the cloud, since it is much larger than subsequent backups. Similarly, if a customer is not using our DRaaS offering, they should also consider how quickly they can transfer large amounts of data for a large restore. Druva offers integrated services to address both needs.
Lastly, any cloud solution is all about shared responsibility. Making sure you and the vendor understand roles and responsibilities for security is key to ensuring a safe work environment.
7. For example, Office 365 users might be blissfully unaware that they have no safety net for their data?
Despite some earlier migration challenges, adoption of Microsoft’s Office 365 has grown unabated, with some reports that it is now in use by over half of businesses. However, in outsourcing the pain, businesses are also outsourcing the data. Take a look at the O365 service agreement and you’ll see acknowledgement that outages will happen, but also that recovery is not a guarantee. In fact, neither backup nor recovery is mentioned in the SLA at all.
Hence, while cloud SLAs are all based on uptime, there is no mention of the all-important ‘recovery time’ that is the standard for any serious data protection solution. It’s this sort of challenge that has seen Druva adopted by a number of firms to support cloud applications, including Office 365, where we have seen 100 percent year-on-year growth.
8. More generally, companies using software-as-a-service applications need to understand just how their data is, or, more likely, isn’t being backed up by the SaaS provider?
Data protection systems integrated into a given SaaS offering do not conform to the all-important 3-2-1 rule, which recommends storing your backups away from your primary system. Regardless of whether the SaaS provider can recover lost data, the systems are not built to withstand massive attacks from outside sources, and offer little to no recourse after a major attack. The difference between effortlessly restoring data from any corner of the business with a true backup solution and the weeks of customer service support and total system reboots that may be needed through a SaaS provider is night and day.
9. So, the headline SaaS uptime quote, which might sound impressive, is rather less important than the time to recovery window?
Remember, uptime and availability have nothing to do with data recovery. The system could be completely available and up, even if all your data has been deleted. We see this time and again. When core company data is compromised and organisations are facing serious potential consequences to their business, a true recovery system is what they need.
10. For end users who have relied on traditional backup solutions and are now looking to upgrade or replace these – what do modern backup solutions offer that can make a difference?
Modern backup solutions offer ease of use and significant cost savings, and can be activated in less than one hour – without the customer needing to install or maintain any hardware.
The ability to scale almost instantly, without limits, is why the cloud makes so much sense for backup and DR. It offers the scalability and performance necessary without the huge costs we’ve all associated with backup. In a traditional solution, you have to buy for peak capacity, and hope that is sufficient for three, five, or however many years your contract lasts. You are responsible for updating, patching and maintaining the hardware, and likely have to chase multiple vendors if anything should ever fail.
But the cloud is perfectly tailored for these fluctuating workloads. Companies accessing backup, especially in the event of a disaster, need limited resources for a finite time. Now they only have to pay when that storage is called upon. It’s the insurance policy you only actually pay for when you need to use it.
11. And what should end users look for when they are evaluating various Cloud backup and recovery solutions?
When looking at services, you must first look at the cost and effort of installing and maintaining your own backup infrastructure in your own cloud account: costs for things like running VMs 24x7, database licensing, storage and egress, and management and maintenance fees. Of course, with a cloud service like Druva, you no longer need to worry about these costs and can focus more on the end result.
There are also a number of areas of functionality that end users need to consider:
12. For example, managing the backup and recovery of a multi-cloud environment is not easy?
I wouldn’t say it’s not easy – it requires the right tools. Druva collects data and unifies backup, disaster recovery, archival and governance capabilities onto a single, optimised data set. Multi-cloud environments become challenging when you have a different solution for each vendor or build up data silos. But that doesn’t have to be the case.
13. Druva describes itself as a Data Protection company – why this emphasis?
Building on the points we’ve covered: we take the immense complexity and cost of data protection and management and offer it as a turn-key service. Additionally, the backup of data has become fundamental, but the key is looking to what’s next. Analytics, AI, machine learning? Those sorts of services require a technology that does more than simply back up your data - it requires something that provides a more comprehensive approach to data management.
Some are looking at ways to leverage the value of secondary data as well, but once again you face a challenge when doing so with traditional infrastructure. Such processing requires significant compute, but is often done sporadically. This is why many people do such processing in the cloud, where they can use burst capacity to do this much cheaper than on-premises. That means our on-premises competitors will most likely need to move data into the cloud in order to meet these needs. Our customers already have their data in the cloud, so they are in a much better position to begin using this emerging technology.
14. Before we finish, can you share one or two recent success stories – Northgate Markets and Egan, perhaps?
Northgate Markets and Egan are key examples of both ease of use and significant cost savings.
Northgate Markets has 40 retail locations and some 7,000 employees, and manages very diverse data content, everything from retail marketing data to banking information. With such a range of offerings, they needed to ensure compliance with a variety of international regulations, and a cloud-first backup and recovery solution like Druva gives them better visibility and control than was possible on-premises. They were able to cut costs by 60 percent, reduce storage size by 70 percent and complete recoveries within minutes.
Egan Company, a leading specialty contractor based in Minnesota with more than 1,100 employees, selected Druva after realizing the “cloud” promised by a hardware vendor wasn’t true cloud. Since moving over, they have reduced risk and cost, increased reliability, and are saving more than 30 hours a month on maintenance and storage management of the firm’s environment.
Founder and CEO, Jaspreet Singh, brings a combination of product vision and general manager experience that has allowed Druva to be one of the fastest-growing companies in the $28B data protection and management market. His entrepreneurial spirit enabled him to bootstrap Druva, which has now raised some $200M in venture funding and serves over 4,000 customers worldwide. His market and technology insights have led him to create the first and only cloud-native, Data Management-as-a-Service company, delivering innovative technology solutions and distinctive consumption models that are disrupting the classic data protection market. Prior to starting Druva, Jaspreet held foundational roles at Veritas and Ensim Corp. Additionally, he holds multiple patents and has a B.S. in Computer Science from the Indian Institute of Technology, Guwahati.
About Druva:
Druva is the global leader in Cloud Data Protection and Management, delivering the industry’s first data management-as-a-service solution that aggregates data from endpoints, servers and cloud applications and leverages the public cloud to offer a single pane of glass to enable data protection, governance and intelligence – dramatically increasing the availability and visibility of business critical information, while reducing the risk, cost and complexity of managing and protecting it.
Druva’s award-winning solutions intelligently collect data, and unify backup, disaster recovery, archival and governance capabilities onto a single, optimized data set. As the industry’s fastest growing data protection provider, Druva is trusted by over 4,000 global organizations, and protects over 40 PB of data.
Better, faster and more efficient. The demand for software delivery which employs agile and DevOps strategies continues to accelerate despite the evolving security threats that are pervasive throughout the software lifecycle. What’s more, these threats are typically not identified until the final stages of development due to the growing rift between software development teams and security teams, the clashing objectives of these teams, and an ever-growing skills gap on both fronts.
By Erdem Menges, Fortify Marketing Manager, Micro Focus.
According to the National Institute of Standards and Technology, the cost of remediating security flaws is thirty times more expensive in production and ten times more expensive in testing than it would be if they were caught in early stages of development. This opportunity to fix issues sooner at a lower cost has encouraged the widespread adoption of application security as an integral part of development models.
DevOps can lead to creating and releasing security defects faster
In 2018, after login credentials were left on an unsecured IT administrative console which was not password protected, Tesla’s Amazon Web Services (AWS) account was hacked and used to mine cryptocurrency. It was a costly breach that hit the bottom line of the organization and exposed proprietary data. In 2016, Uber experienced an even more severe breach when the company’s GitHub repository was targeted and the login credentials for its cloud provider account were exposed. In this case, instead of the cloud provider account being used as free resourcing, the records of 57 million customers and drivers were leaked.
Today, there is a brighter spotlight than ever before on security and privacy, in the form of legislation such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) in the US. By both increasing penalties for consumer data loss and heightening the potential reputational damage involved, such regulation demonstrates how vulnerable and costly security defects in applications can be for any organisation. The security risks associated with the faster release cycles involved in Agile and DevOps can be pinpointed to a series of factors which are largely unique to the practice within the business environment.
First, speed still tends to be the sole objective when utilizing DevOps practices. The value of DevOps to an organisation is mostly focused on its ability to operate more rapidly than before. For example, a 2017 Gartner report predicted that by 2020, each application developed will have 30 releases per year in order to stay aligned with the demand from customers and partners. The increasingly rapid time-to-market requirements will continue to blindside or cripple organisations’ security practices, forcing them to work reactively and under the enormous pressure of the significant financial and reputational risks at play.
Second, DevOps predominantly operates in the cloud, leveraging the power of micro-services, where the ways of restricting and protecting the DevOps environments are less known. Compounding this issue is the fact that much of this new development architecture is built upon relatively new open source tools. The developers of these tools are also pushing for more frequent updates, trying to keep up with the increasing pace of change in the demands from their users, and so these tools add their own security challenges. Moreover, DevOps environments often rely on a diverse range of open source components. As these components each have inconsistent standards, logging formats, and API hooks, this can make it extremely difficult to maintain holistic security oversight across the many components that make up a fully functioning application.
Third, DevOps teams are usually spread across geographically dispersed regions. One of the key upshots of the fact that DevOps teams typically work in the cloud is that organisations can leverage the expertise of various groups working across different locations. This can also be used to generate competitive advantage through improving retention of key employees by giving them the flexibility to work from the location which best suits them, rather than having to be in proximity to whichever product team they are currently working within. This typically means each team is employing a different culture with different sets of tools – development, QA, and monitoring, for example – potentially impacting telemetry and creating more varied vulnerabilities.
Finally, being part of a DevOps team often means being given the keys to everything. Out of necessity, DevOps often involves less restricted access to test and production environments, in order to open up flexibility and eliminate friction from the collaboration process. But, as the Tesla and Uber examples show, this creates a lucrative target for hackers for whom small flaws can be exploited to cause massive impact.
The solution to these numerous challenges lies in the evolution from DevOps to DevSecOps and to a seamless application security mind-set. This includes security becoming an integral part of the development culture which incorporates ongoing, flexible collaboration between dev/testing/release/ops engineers and security teams.
Five steps towards making application security seamless
For a DevOps team to succeed, security planning and security checks must be a foundational element built into the continuous integration and continuous delivery (CI/CD) pipelines. That means adopting a DevSecOps approach that employs a seamless application security methodology and a five-step process to embed security:
1) Develop with security in mind
Developers outnumber security professionals in every organization, so empowering developers to take responsibility for the security of their own code is a critical first step. This means we need more than simply educating developers on security: It’s imperative to provide tools that give real-time security feedback.
2) Test early, test again, and then test again
Adopting static application security testing (SAST) is a good step to take, as it will identify the root causes of security issues right from the outset of the coding process. Meanwhile, having tools and integrations that will provide real-time security feedback for fresh code helps keep the DevOps teams’ focus on security.
3) Place security within lifecycle management
Dealing with the dispersed nature of DevOps within an organisation requires strong lifecycle management tools, which will perform security scans as part of a build, to immediately expose vulnerabilities and provide teams with the information needed to track and fix them.
4) Provide automated security tools
In order to maximise the limited resources that most organisations have at their disposal, automate security tests in the same way that you automate unit or integration tests (a minimal sketch follows this list). By employing automation, the frequency of tests can be increased at minimal cost, enabling teams to fix identified vulnerabilities much more quickly.
5) Post-release monitoring
The final security tools that organisations should leverage are runtime application self-protection (RASP) and continuous dynamic application security testing (DAST). These approaches focus on applications in production, protecting the environment from risk profile changes and zero-day vulnerabilities.
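As an illustration of step 4, security scans can be wired into the same test runner the pipeline already executes, so new findings fail the build like any other broken test. The sketch below is one possible approach, assuming the open-source Bandit scanner for Python code and a hypothetical src directory; it is not Fortify's tooling, and the flags and JSON fields should be verified against the installed Bandit version.

import json
import subprocess

def run_sast(target_dir: str) -> list[dict]:
    """Run the Bandit static analyser over a source tree and return its findings.

    Assumes Bandit is installed (pip install bandit); check the flags against
    the version you have installed.
    """
    proc = subprocess.run(
        ["bandit", "-r", target_dir, "-f", "json"],
        capture_output=True, text=True,
    )
    report = json.loads(proc.stdout or "{}")
    return report.get("results", [])

def test_no_high_severity_findings():
    """A pytest-style gate: fail the build if any high-severity issue is reported."""
    findings = run_sast("src")                                   # hypothetical source directory
    high = [f for f in findings if f.get("issue_severity") == "HIGH"]
    assert not high, f"{len(high)} high-severity security findings - fix before release"

Because the gate runs on every commit, developers get the real-time feedback described in steps 1 and 2 without waiting for a separate security review.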
In the end, ensuring a seamless approach to application security that minimises the amount of disruption to the development team is critical for securing DevOps. Through the integration of an end-to-end application security solution that covers the entire software development lifecycle, organisations will ensure that security is no longer an afterthought or a barrier to innovation.
The July issue of Digitalisation World includes a major focus on smart cities. Much talked about, and already being developed in various parts of the world, what are the key ingredients needed to create a smart city? In particular, what role do IT and digital transformation technologies have to play? Through a mixture of articles and comment pieces, industry experts provide plenty of answers.
Part 1.
A perspective by Philip Low, Chairman of BroadGroup.
The United Nations predicts that by 2050, 68% of the world population will live in urban developments; 60% in megacities of over 10 million people. That means over 6.5 billion people living in cities, and the trend is only headed in one direction. The dystopian view in fiction has tended towards future cities that become gridlocked cesspits of pollution, with residents struggling to breathe and to avoid contagion of one sort or another. Optimists in the genre hope for a better outcome; some putting their faith in the discovery of cheap, non-polluting power, others in off-loading population to other planets. None of this seems likely in the foreseeable future; in the meantime, the best hope is probably smart cities.
Smart cities are being created at the confluence of interlocking new technologies: The Internet of Things (IoT), Edge Compute, fibre optics, artificial intelligence (AI)/machine learning (ML), 5G and Flash-memory. All of these may be coming together in the nick of time to ensure cities become liveable spaces, not wastelands. Between them, these technologies have the potential to enable us to manage everything from traffic flow to street lighting, parking to security and waste management to messaging. They can also help manage the atmosphere inside and outside buildings and make our energy production, distribution and usage more efficient. So, we come to the other dystopian vision – big brother is watching you. Smart cities depend on data, but how much personal data security are citizens willing to trade off for the potential benefits?
The key to smart cities is data
IoT will provide the data, with sensors everywhere inside and outside buildings, to monitor traffic hot spots; bins that need emptying; parking meters that are available or not working; CO2 levels and other pollutants; demand on utilities and public transport; and always more. There are still problems to iron out: as always with new technology, we have to develop standards that will ensure systems can communicate with each other, to pull data out of the silos. This in turn raises cybersecurity risks that must be addressed. There is also the minor problem of whether we can produce and power all the sensors envisaged. By all accounts we are talking in billions and into trillions – that is an awful lot of batteries or power connections to manufacture. Scepticism perhaps should rise in proportion to the number of zeros being bandied about.
The backbone of any smart city will be, and in some cases already is, a fibre optic spine that provides connectivity: public service Wi-Fi, data collection and dissemination, communications, transportation, education and other public services. This has to be regarded as infrastructure, a shared facility, just like a road, along which anyone can transport their data and communications. This raises the perennial problems for technology, standardised connectivity and partnership funding.
5G, which is still a decade away from full development and deployment even in major cities, will, provided enough investment is found, eventually create the capacity to handle the huge volumes of connections operating in a smart city. Connection density will increase from 4G’s current 2,000 to over 100,000 active users/sensors per square kilometre, alongside the much-hyped increase in speeds to gigabits per second and much reduced latency.
Edge Computing will not replace data centres but will enable data to be processed locally – some predict around 75% of all data by 2025 - and the development of local applications. Take, for example, video analytics for traffic management. It makes sense to avoid the bulk movement of this kind of data and process it locally, which will also match the latency requirements for micro-management of adjustments in traffic handling. There are security issues that require local storage and processing of data, sometimes for legal compliance which, for example, may not allow data to be moved out of a legal jurisdiction. Networking a number of edge sub-stations together creates a local distributed computing resource and Wi-Fi network. The savings on backhaul costs could be significant, coupled with reduced latency and greater resilience.
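The data-volume argument is easy to quantify. In the hypothetical sketch below, per-vehicle detections produced by local video analytics are reduced at the edge to a per-minute summary, so only a few hundred bytes, rather than the raw video, cross the backhaul.

from collections import Counter
from statistics import mean

def summarise_interval(detections: list[dict]) -> dict:
    """Reduce a minute's worth of per-vehicle detections to a few bytes of summary.

    Each detection is whatever the local video-analytics stage produces,
    e.g. {"lane": 2, "speed_kmh": 41.5}; only this summary is sent upstream.
    """
    if not detections:
        return {"vehicle_count": 0, "mean_speed_kmh": 0.0, "per_lane": {}}
    return {
        "vehicle_count": len(detections),
        "mean_speed_kmh": round(mean(d["speed_kmh"] for d in detections), 1),
        "per_lane": dict(Counter(d["lane"] for d in detections)),
    }

# Example: ~1,800 detections in a minute reduce to one small record,
# so the backhaul carries kilobytes instead of a raw video stream.
sample = [{"lane": i % 3, "speed_kmh": 30 + (i % 25)} for i in range(1800)]
print(summarise_interval(sample))

The same aggregation step is also where data can be anonymised before it ever leaves the jurisdiction, which speaks to the compliance point above.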
The introduction of Artificial Intelligence (AI) and Machine Learning (ML) plus, now deep learning, will surely mean that the enormous increase in data can be handled and processed ever more efficiently and productively, in time. Other technologies are also emerging to help things along. Flash-memory and NVRAM technologies are finally reversing the stupendous growth in cooling and drive wattage costs, as well as floor space for racks, in both data centres and edge sub-stations.
Revolutions
So, the technologies for smart cities are exciting but we are already seeing the results of a total focus on technology which forgets the people who have to live in our smart cities. Examples exist from earlier industrial revolutions where steam and water power first brought people together in grim, polluted cities. The second industrial revolution gave us mass production and the third automation and electronics. None of these were unqualified successes for the growing city populations. Digitalisation and data are being heralded as the fourth industrial revolution and need to genuinely produce quality of life as well as efficiency in production.
To get smart cities right we need to clarify who is responsible not just for the infrastructure but also for the data: its creation, management and security. That means addressing the regulatory issues, from city planning to sharing infrastructure, and from controlling polluters/pollution to the imposition of cyber security standards. “Smart Cities” need to decide what they are for. They may be looking to drive economic growth by attracting certain types of businesses through the services they can provide access to, or the integration of physical transport to attract manufacturing or distribution. Perhaps first they need to address keeping data secure and anonymised. IoT is all very well, but it is a melange of personal, corporate and city data, with many permutations of which data is useful to different participants; paramount, though, is what data citizens are willing to have shared.
Leading companies in the digital age are beginning to realise, as some of the industrialists in the 19th century did, that all their workers and those that service them, have to live somewhere. So, when hi-tech jobs drive prices of housing upwards, somehow provision has to be made for the cleaners and shop staff, drivers and teachers that will enable the high earners to work productively. Sheer numbers mean we have to rethink how cities operate – better transport solutions, better atmosphere control, inside and outside buildings, better planning to meet housing needs and leisure needs.
Data for all
This year’s Datacloud Europe Global Congress & Exhibition, which took place in Monaco, brought together many of those who make decisions on how smart cities evolve. This means both existing major cities that are being ‘retrofitted’ with fibre optic spines and IoT, and the new megacities that will blossom, particularly in Africa and Asia. Data, its generation, collection, processing and application, has to become leaner, less wasteful of resources in operation and cooling. It also has to become better at identifying the needs of all those who will live in the smart cities of the future.
In the software world, being fast is vital. Not only is being first to market important, but teams must also respond to customer demand by adding new features regularly and eliminating bugs quickly. Indeed, many enterprises are pushing out several new software updates every working hour as they turn to cloud native DevOps to drive faster innovation.
By Eran Kinsbruner, Chief Evangelist at Perfecto.
As most developer teams know, an increase in speed, combined with more app complexity, can result in poor UX and, ultimately, customer dissatisfaction. It is this need for speed that has threatened to create unstable, unsustainable test environments, where developers are more stressed, less focused and less productive.
As an antidote, we have seen Continuous Integration (CI), Continuous Testing (CT) and Continuous Delivery (CD) emerging as drivers for maintaining quality whilst developing at speed. Of the three, Continuous Testing is by far the most challenging, but for us at Perfecto, it’s the only way to allow teams to iterate more quickly whilst still prioritising quality.
So, as more teams integrate CT into their strategies, it is clear that understanding and filtering the resulting data quickly is critical in order to prevent bottlenecking the DevOps process. The difficulty is that teams simply don’t have the time to analyse the vast amount of data that CT generates. Indeed, research at Perfecto tells us that organisations spend between 50 and 72 hours per regression cycle analysing test results, filtering out noise and assessing failures which may impact their software releases.
So how can teams overcome this and make CT work for them? Let’s consider the five most useful tools that teams need in their toolkit in order to quickly and efficiently analyse data, triage issues, and act upon failures with the best possible insights.
1. Executive dashboards
If the problem is that automated testing can create so much data that it’s tough to focus on what’s important, it’s crucial to get the right test reporting tools to help cut through the chaos.
Dashboards have evolved significantly over the years, allowing developer and Quality Assurance managers to easily examine the pipeline, see CI trends related to time, build health and more. Simply put, a DevOps dashboard can mean the difference between spotting a bug before your users do, and avoiding a mountain of support tickets from angry customers.
However, there is plenty of choice in the market - so when you pick your solution, make sure it includes quality heatmaps and CI dashboards. These dashboards make it easy to spot an anomaly in the CI pipeline, allowing managers and practitioners to quickly drill down into the single test report and into the issue.
2. Single test report visibility with advanced reporting artifacts
With a single test report, practitioners are able to access the entire flow of test steps, videos, logs and screenshots, as well as Jira – and even have the ability to drill down into the source code.
Similarly, a multiple execution digital report that enables quick navigation within a build execution allows teams to quickly identify potential problems - meaning more time fixing and less time searching.
3. Cross-platform reports
Cross-platform testing has become one of the key tenets of successful product development. Verifying the suitability of your solution to work across various different platforms is vital to ensuring a good user experience - and there’s a lot to keep up with in an ever-changing landscape of smartphones and browsers; things such as visual differences between mobile platforms and desktops can be identified through such reports. In addition, platform specific anomalies will be highly visible in cross-platform reports that compare the screens across multiple form-factors and operating systems.
If you’re in a CT environment, you’ll be running regression, smoke and other tests across an array of platforms, and you’ll need effective cross-platform reports which show the UI/UX simultaneously across multiple screen sizes, resolutions, and platforms.
4. Noise reduction tool, powered by AI and analytics
Today, finding which error classifications are true application bugs is critical to efficiency. Noise reduction tools that are powered by AI allow teams to filter out failures caused by issues that are not software defects – for example, device connectivity problems and incorrect scripting (a simplified sketch follows this list). Having a tool with the ability to locate each failed test execution, the root cause, and the classification category is a huge productivity boost for both the test engineer and the developer.
5. Actionable insights with prescriptive ways to resolve issues
There are plenty of analytics tools on the market, but the most effective are those that help convert data into actionable insight for better and faster decision-making. Indeed, once an issue is detected, classified and reported, having deep insight brings practitioners halfway towards the resolution of the issue. Remember, data is only valuable when it’s being used to change or improve something - and it’s important to consider this throughout the CT process.
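As a much-simplified illustration of the noise-reduction idea in item 4 above, failures can be classified before a human ever sees them. Commercial tools use machine learning rather than keyword rules, and the log patterns below are entirely hypothetical.

import re

# Hypothetical patterns separating environment/script noise from real product defects.
NOISE_PATTERNS = {
    "device_connectivity": re.compile(r"device (disconnected|offline)|no devices attached", re.I),
    "scripting_error":     re.compile(r"NoSuchElementException|stale element", re.I),
    "lab_timeout":         re.compile(r"session timed out|tunnel closed", re.I),
}

def classify_failure(log_text: str) -> str:
    """Label a failed test so triage can skip non-product issues."""
    for label, pattern in NOISE_PATTERNS.items():
        if pattern.search(log_text):
            return label
    return "suspected_app_defect"            # everything else goes to a developer

failures = [
    "E/test: device disconnected during step 7",
    "AssertionError: expected basket total 3, got 2",
]
for log in failures:
    print(classify_failure(log))

Even this crude filter shows the principle: the hours spent per regression cycle drop when only the suspected application defects reach a person.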
We know that the modern approach to delivering software is about agile and fast development cycles because users expect the steady flow of new features and updates to work perfectly, without compromise. The challenge for IT is to deliver quickly whilst maintaining user experience, and implementing the right tools is a critical component in this quest.
So, as you work to build your next test analysis, consider these five tools to efficiently evaluate the data, act upon it, and deliver iterations and features quickly, and with confidence.
The July issue of Digitalisation World includes a major focus on smart cities. Much talked about, and already being developed in various parts of the world, what are the key ingredients needed to create a smart city? In particular, what role do IT and digital transformation technologies have to play? Through a mixture of articles and comment pieces, industry experts provide plenty of answers.
Part 2.
By Andrew Palmer, Consulting Director, Telecoms, CGI UK.
More and more people are living in cities. In fact, over 50% of the global population now live in urban areas and this trend will continue. Cities play a dominant role in global consumption, production and pollution, which in turn associates them with some of our major environmental and ecological problems such as air pollution, greenhouse gas emissions, waste, and poverty.
However, the concentration of population, activities and resource use in cities also brings potential for important efficiency increases, as well as for multi-purpose solutions, combining different but complementary sustainability goals. Cities are also centres of innovation and creativity, where incredible changes are possible. The emerging concepts of Smart Cities and Connected Places encourage structural transformation processes that can effectively direct future urban development towards sustainability.
In Smart Cities, a network of sensors, cameras, wireless devices and data centres forms the key infrastructure, supporting the provision of essential services in a faster and more efficient manner.
A network of sensors can facilitate the optimal use of resources with connectivity to tell citizens when and where to conserve time, money and energy. These sensors can control, detect and manage unnecessary use and make certain adjustments according to need. By adding Intelligent Automation, Advanced Analytics and Machine Learning to interpret and act upon the data gathered from these sensors, it is possible to move from purely reactive to predictive and then cognitive maintenance and support models for the services supplied and resources consumed.
An example of this is water management. With sensors fitted on every pipe, water leaks can be easily detected and corrected before any heavy losses occur. Besides this, the irrigation systems in public parks can automatically turn off whenever rain is detected, in order to save water. At CGI, we have recently worked with Northumbrian Water to develop the concept of using existing, low-cost sensors, packaged into a cylindrical tube that is placed into the cistern of a toilet. This monitors various components of the mains water flowing into a property, in order to measure the quality of the water. This insight has not only helped customers understand elements such as water quality and temperature, but has also enabled water utilities to monitor water quality on their network remotely, by connecting devices over a wireless communication network back to their operations.
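As a hedged sketch of how such sensor data might feed the predictive maintenance model described above, the example below flags a pipe for inspection when its recent flow readings drift away from an expected baseline; the thresholds and readings are invented for illustration and are not drawn from the Northumbrian Water work.

```python
# Illustrative reactive-to-predictive shift: rather than waiting for a pipe to
# fail, flag it for inspection when recent flow readings drift outside an
# expected band. Baseline, tolerance and readings are invented for the sketch.

from statistics import mean

EXPECTED_FLOW = 12.0   # litres per minute, assumed baseline for this pipe
TOLERANCE = 0.15       # a 15% drift in the rolling average triggers a job

readings = [12.1, 12.0, 11.8, 10.9, 9.8, 9.2]   # flow readings, oldest to newest

def needs_inspection(recent, window=3):
    rolling = mean(recent[-window:])             # average of the newest readings
    drift = abs(rolling - EXPECTED_FLOW) / EXPECTED_FLOW
    return drift > TOLERANCE, rolling

flagged, rolling = needs_inspection(readings)
if flagged:
    print(f"Schedule inspection: rolling flow {rolling:.1f} l/min vs expected {EXPECTED_FLOW} l/min")
```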
Other examples are already in operation or at the experimentation stage.
Sustainable urban economic development must encourage symbiotic relationships between industry, government, academia and citizens, to ensure the sustainable management of human, ecological and economic capital. The negative implications of over-consumption are particularly evident within cities. By defining what an improved quality of life looks like, and then forming the economic and governance frameworks to sustain it, it will be possible to outline how to design, support and govern more sustainable cities. People can subsequently enjoy an increasingly comfortable and greener quality of life.
Furthermore, intelligently designed cities can respond to the major environmental, social, and economic challenges of the 21st century. We have some great examples of this change in approach, in places such as Barcelona, Vienna, London and Liverpool.
Cities therefore represent both a complex challenge and an amazing opportunity for greening our economies and advancing sustainable development. In the UK, as mentioned earlier, the Government has pledged £1 billion in central investment, to be matched by interested parties, in a series of testbeds and trials aimed at moving the UK to the fore in 5G deployment globally. The most recent phase, the Urban Connected Communities project, awarded to West Midlands Combined Authority in April 2018, aims to build the fibre, 5G and sensor/device infrastructure required to support a region of more than 500,000 people. Existing testbed learning and use cases are expected to be combined with new innovations to provide a sustainable connected intelligent environment that can be used as the blueprint for all future UK Smart Cities.
The rural picture
Up until now, the majority of investments and the focus of research have centred on cities, which could lead to a “two-speed” 5G world, as the business case for investment in sparsely-populated and remote areas is that much more difficult to prove. The reality is that a lot of the use cases being developed and trialled in urban and city environments will have equal applicability in the rural environment, provided the funding or subsidisation can be found.
However, use cases specific to the rural environment do exist and have to be considered now in order to allow the rural economy and society to benefit in the same timeframe as their urban relatives. There are examples such as “MooCall”, where sensors attached to cows are used to predict how close heifers are to calving, meaning farmers are able to leave them at large for longer but still keep track of their condition.
In the UK, there are two specific 5G testbeds being funded to look at the impact of 5G and sensors on rural environments, which are also part of the Government’s £1 billion investment. These are looking at both connectivity and Smart/Connected use cases, which will then be taken and tested at a larger scale when the recently-announced Rural Connected Communities (RCC) project is launched.
What next?
5G will accelerate the use of wireless sensors and sensor nodes, as well as enable advances in virtual reality (VR), augmented reality (AR), artificial intelligence (AI) and other areas. The present prediction is for around 30 billion sensor devices to be actively deployed and in use by 2025.
5G networks are the next generation of mobile internet connectivity, offering faster speeds and more reliable connections on smartphones and other devices than ever before. These networks will help power a huge rise in IoT technology, providing the infrastructure needed to carry substantial amounts of data. In turn, this will allow for a smarter and more connected world.
Businesses have been scrambling to leverage the power of the Internet of Things (IoT) for years. It’s a market that continues to expand, and is expected to reach $933.62 billion by 2025. As it flourishes, enterprises are flocking to get a share of the pie. But it has now become clear that the pace to market is being prioritised over security. That’s because the opportunity is huge, but so is the risk.
By Martin Wimpress, Developer Advocate at Canonical.
The Ministry of Defence (MOD) reported last year that 90 per cent of recorded security threats were due to defects in software. It all comes back to poor software development. But where are those threats actually coming from, and how do those at the coalface, the developers, react?
The historical issue
The Linux ecosystem has always strived for a high degree of quality. Historically, it was the sole responsibility of the community to package software, gating each application update through a careful review process to ensure it worked as advertised, and on each Linux distribution. This proved difficult for everyone involved.
User support requests and bugs were channelled through the Linux distributions, where there was such a volume of reporting that it became difficult to feed information back to the appropriate software authors.
As the number of applications and Linux distributions grew, it became increasingly clear that this model could not support itself. As a result, software authors took matters into their own hands, often picking a single Linux distribution to support and skipping an application store entirely. They lost app discoverability and gained the complexity of running duplicative infrastructure.
Waking up to the threats
The past year has demonstrated for many that, while software updates have not become substantially easier for end users to manage, the frequency and impact of security vulnerabilities make the process unavoidably necessary. It is no longer acceptable to consider any connected software a finished product. Software maintenance must stretch to cover the entire lifetime of the product, especially in today’s connected world.
This realisation was made even more prominent by the likes of Spectre and Meltdown – a huge wake-up call for every industry. It did not matter what space you worked in, it very likely affected normal business. Add to this the rise of robotics and edge computing, which bring more devices online, and the threat becomes even more widespread.
Canonical carried out research with IoT professionals, and found that over two thirds of respondents felt that a lack of an agreed industry security standard worried them when it came to IoT. To further compound the issue, nearly a third of respondents claim they are struggling to hire the right talent when it comes to IoT security. So the problem does exist, and there is widespread awareness, but without the right skills, businesses are relying on their developers to ensure that their software is robust.
Increasing pressure on the developer
This has placed increased responsibility on developers at a time when the expectations of their role are already expanding. They are no longer just the makers behind the scenes; they now bear the risk of breaking a robot arm with their code, for example, or bringing down MRI machines with a patch. As an industry, we acknowledge this problem without acting on it. A bad update can happen because software is not an exact science - but we then ask these developers to roll the dice and compromise on security for the sake of innovation.
On the other hand, developers can trade the evolution and growth of their software for a sense of safety by treating their code as immutable: a device ships and is never updated. Developers are driven to this approach by device makers who view clogged support lines as far more inconvenient than facing down a security breach. Meanwhile, the industry continues creating ever more software components to plug together and layer solutions on top of. Not only does the developer face the update question for their own code, they must also trust every developer facing that same decision in all the code beneath their own.
How then can developers under these pressures deliver on the promises of their software with predictable costs? The challenge does not have a silver bullet, but a solution may just lie in snaps.
Extending the arsenal
Snapcraft is a platform for publishing applications to an audience of millions of Linux users. It enables authors to push software updates that install automatically and roll back in the event of failure. The likelihood of an errant update bricking a device or degrading the end user experience is, therefore, greatly reduced. If a security vulnerability is discovered in the libraries used by an application, the app publisher is notified so the app can be rebuilt quickly with the supplied fix and pushed out.
Because snaps bundle their runtime dependencies, they work without modification on all major Linux distributions. They are tamper-proof and confined. A snap cannot modify or be modified by any other app, and any access to the system beyond its confinement must be explicitly granted. This precise definition brings simpler documentation for installing and managing applications. Taken with automatic updates eliminating the long tail of releases, more predictable and lower support costs are typical. There is minimal variance between what the QA department tests and how the application behaves on end user system configurations.
Snapcraft also provides powerful tools to organise releases into channels. One set of tools can be used to push app updates from automatic CI builds to QA, to beta testers, and finally to all users. It visualises updates as they flow through these channels and helps you track user base growth and retention.
Empowering developers with the confidence to build
The gap between a real threat and a hypothetical one is smaller than it appears. You essentially have to ‘build for failure’; assuming that something ‘cannot go wrong’ will cost you the most when it does, because you don’t have a plan.
This is the approach taken by snaps. Instead of treating software updates as a risky operation and only employing them in the rarest circumstances, snaps acknowledge that updates will fail. When they do, the snap rolls back to the last working version, all without the end user experience being compromised. The developer can then investigate without time pressure.
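The rollback behaviour described here can be reduced to a simple pattern. The sketch below is a generic illustration of that ‘build for failure’ update loop - apply, health-check, revert - and is not the snapd implementation; the hook functions are hypothetical.

```python
# Generic "build for failure" update pattern, sketched to show the idea snaps
# rely on: install the new version, run a health check, and roll back to the
# last known-good version automatically if the check fails.
# This illustrates the concept only; it is not how snapd is implemented.

def apply_update(current_version, new_version, install, health_check, rollback):
    install(new_version)
    if health_check():
        return new_version              # update accepted
    rollback(current_version)           # revert without user intervention
    return current_version              # the last working version stays live

# Hypothetical hooks for a single device:
state = {"running": "1.4"}

def install(version):      state["running"] = version
def rollback(version):     state["running"] = version
def health_check():        return state["running"] != "1.5"   # pretend 1.5 is broken

running = apply_update("1.4", "1.5", install, health_check, rollback)
print(running)  # -> "1.4": the device keeps working while the developer investigates
```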
But because software-first companies are being created all the time, untold pressure is placed on these teams. If there has to be a compromise, it should not be on security. We’ve seen in the past year the kind of damage that cyber-attacks can cause. The stakes are too high to ignore the capabilities that the likes of snaps can offer.
Developers are the lifeblood of so much of what is being built, and are ever more vital to operations across the entire business. So why wouldn't we empower them? Whether you’re building for a desktop, cloud, or IoT solution, embracing open source and snaps will keep users up to date and make system configuration issues less likely, freeing developers to code more and debug less. By following this lead, developers will have the time, but crucially the confidence, to continue building great things.
The July issue of Digitalisation World includes a major focus on smart cities. Much talked about, and already being developed in various parts of the world, what are the key ingredients needed to create a smart city? In particular, what role do IT and digital transformation technologies have to play? Through a mixture of articles and comment pieces, industry experts provide plenty of answers.
Part 3.
By Michael Winterson, Managing Director of Services at global interconnection and data centre company Equinix.
According to IDC, the smart city market is projected to grow from $81 billion globally in 2018 to over $158 billion in 2022[1]. But what constitutes a smart city?
It’s a phrase that’s bandied about a lot these days, and with so much hype often comes a great deal of confusion. There are many different ‘visions’ of what a smart city is or could be. By definition, a smart city is one that incorporates information and communication technologies to enhance the quality and performance of urban services such as energy, transportation and utilities, in order to reduce resource consumption, wastage and overall costs.
But while many cities are in the process of adopting smart technology with this ultimate vision in mind, how can businesses ensure they are also becoming truly ‘smart’?
The backbone of a smart city
When people speak about smart cities, they often imagine a futuristic metropolis; a world completely apart from the one we’re used to. In reality, smart cities will be the product of the ongoing and gradual evolution of information technologies, many of which we’re already familiar with in our everyday lives.
Along with the wider availability of high-speed internet access and the growing number of Wi-Fi-enabled devices and sensors, Internet of Things (IoT) technology has become increasingly sophisticated, allowing connected things not only to link to the internet but also to talk to each other – a necessity for any smart city. A smart city will need to utilise IoT to integrate its information technology, which in turn will optimise the efficiency of operations and services. IoT will become the backbone of a smart city.
This increasingly sophisticated technology gives rise to a massive amount of data collected from everyday objects. Mining multiple data sets will give us not just a single-dimensional view of the world but multi-dimensional perspectives and insights. In the context of smart cities, these insights can help better urban planning in terms of transportation systems, water and electricity supply, waste and pollution management, and more. The IoT will enable us to constantly sense and process information from the outside world in an effort to bring efficiency to everyday life in a city.
Businesses can also harness this data to improve their customer services and everyday experiences. By recording and transferring data to monitor important processes, IoT devices give companies new insights, boost efficiencies and allow them to make more informed decisions, offering them the opportunity to be ‘smart’.
Interconnecting the city
A truly smart city requires digital infrastructures that can physically link dispersed sensors, devices and machines that make up public systems, services and experiences, so they can exchange information in real time. This type of city depends on fast network speeds and minimal latency to exchange data traffic between billions of end devices and their edge nodes and core clouds. Placing network hubs close to users, data and clouds will enable these cities to operate better and provide a better citizen experience.
With 200 International Business Exchange (IBX) data centres located in key metros around the world, Equinix enables this kind of distributed architecture. Through Equinix Cloud Exchange Fabric (ECX Fabric), a smart city architect can directly, securely and dynamically connect the distributed infrastructure and applications a city needs. It enables on-demand, data centre to data centre network connections between any ECX Fabric locations, within a metro or globally, via software-defined interconnection. By using interconnection – the private exchange of data – companies can also ensure that they connect to one another without fear of being hacked. In fact, with data becoming so valuable, the need to create data markets is practically inevitable. These markets will help sources of data feed into artificial intelligence systems in a way that not only protects both the data and the system but also coordinates the transfer of value (i.e. charging). Interconnection also allows companies to bypass the public internet, keeping the data flowing between organisations secure. With potentially hundreds of disparate data sources and owners interacting with a similar number of algorithms at speed, latency and throughput will be key – two more reasons for interconnection.
Although this may seem like a big shift for organisations, putting simple infrastructures in place can ensure they adapt to the changes a smart city brings. This will be a long-term journey in which, over the next decade, numerous sophisticated applications will provide value and improve the quality of life for a city’s citizens.
For businesses and countries alike to avoid losing out to competitors, they must ensure they are adapting their digital infrastructure to keep up with the demands of customers/residents/visitors in a smarter city.
[1] IDC’s Worldwide Semiannual Smart Cities Spending Guide, 2018
If you’ve seen Netflix’s new series, “The Society”, then you’ll know it follows a group of high schoolers coming to terms with the sudden disappearance of all adults. With no explanation as to how their parents disappeared, the teenagers have to adapt and learn how to survive independently. With complete social and economic collapse, they have no consistent food supply. This means they have to learn how to farm their own food in order to guarantee supply for an uncertain future.
By Martin Hodgson, Head of UK and Ireland at Paessler.
You may wonder what this has to do with feeding a world of 9 billion people – and the role technology may play. But this is a striking allegory for how we need to develop more sustainable farming practices if we are to guarantee the food supply for an ever-growing human population. It’s a topic on all of our minds. Global food demand is on the rise. In fact, according to McKinsey, if current trends continue, then by 2050 caloric demand will increase by 70 per cent, with crop demand for human and animal consumption set to increase by at least 100 per cent. These demands are shaping our agricultural markets in a way that we have never witnessed before.
Agriculture is an industry that is no stranger to technology, especially when it comes to the Internet of Things (IoT). Putting technologies like blockchain and artificial intelligence to one side, IoT is revolutionising the industry. By enabling intelligent objects to connect to one another and to the outside world using the internet, it’s possible to be smarter in our approach to farming. Utilising sensors and processors makes it much more feasible to farm in real time. The question is, are enough farmers using IoT technologies, and are they doing so sustainably?
Agriculture of Things
IoT can be adopted into agricultural practices in many ways. One interesting example of this is precision farming. This is a method that pursues the goal of managing agricultural land in a site-differentiated and targeted manner. Take a farm full of cattle for example. By implementing Precision Farming Technologies (PFT), it becomes possible for the farmer to monitor each individual animal on the farm. This means monitoring the animal’s temperature, nutrition levels and also monitoring for illness or stress. This enables livestock farmers to identify any poorly animals – and to treat them and get them back to health faster. Whilst each farmer will monitor different things, dependent on the livestock they are looking after – the idea remains the same. IoT sensors provide real-time insights into each animal, ensuring livestock welfare, sustainability and minimising loss of produce.
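As a minimal sketch of the per-animal monitoring idea, assuming each tagged animal reports a body temperature over an IoT network, the example below simply flags readings outside a healthy band; the range and readings are illustrative, not veterinary guidance.

```python
# Minimal sketch of per-animal monitoring in precision farming: flag any
# tagged animal whose reported temperature falls outside a healthy band.
# The band and the readings are illustrative assumptions only.

HEALTHY_RANGE_C = (38.0, 39.5)   # assumed normal body-temperature band for cattle

readings = {
    "cow-041": 38.6,
    "cow-087": 40.2,   # elevated: possible illness or stress
    "cow-112": 38.9,
}

def flag_unwell(temps, low=HEALTHY_RANGE_C[0], high=HEALTHY_RANGE_C[1]):
    return {tag: temp for tag, temp in temps.items() if not (low <= temp <= high)}

for tag, temp in flag_unwell(readings).items():
    print(f"Check {tag}: {temp:.1f} °C is outside the healthy range")
```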
Another example of IoT in agriculture is Variable Rate Technology (VRT). This is similar to precision farming but, instead of monitoring livestock, it manages site-specific or partial-impact soil tillage using differential global positioning systems.
Smart vineyards
To showcase smart technologies at play in the agriculture world, a great example is smart vineyards. For vineyards to remain sustainable, we must be aware of the impact that global warming is having on viticulture. This means preparing for rapid weather changes, and extreme weather conditions.
This is why today, we’re witnessing more and more winegrowers placing reliance on sensors distributed around the vineyards. The sensors are relied on to send environmental data, drone images and information about the composition of the leaves to cloud platforms. The result – an easier way to plan work on a day-to-day basis. Essentially, thanks to IoT technology – wine growers get much more peace of mind.
Work smarter, not harder
Being smarter in our approach is more important than ever before, and it’s important farmers understand that IoT does not remove their social responsibilities. Unsustainable methods such as poor irrigation practices have contributed to the loss of about one quarter of the land used for agriculture over the last 25 years. To put this into perspective, every three years an area the size of Germany is lost to deserts, while the world’s population has grown by about two billion over the same period.
This is why innovative technologies are needed, and IoT isn’t the only one. Hydroponics, the art of cultivating plants in water rather than soil, is another approach being adopted today. It means less water is consumed, nutrient supply can be better controlled, yield and quality can be improved, and no herbicides are required.
When it comes to implementing change for the better, what we do now will be integral for ensuring a more sustainable future. Agriculture industries across the globe need to utilise modern technologies in order to secure a reliable global food supply for the coming decades.
The July issue of Digitalisation World includes a major focus on smart cities. Much talked about, and already being developed in various parts of the world, what are the key ingredients needed to create a smart city? In particular, what role do IT and digital transformation technologies have to play? Through a mixture of articles and comment pieces, industry experts provide plenty of answers.
Part 4.
By Mark Lippett, CEO at XMOS.
Analyst firm IDC predicts that worldwide IoT spending will reach $745 billion in 2019, a significant increase from the $646 billion spent the previous year.
As we enter an era of intelligent connectivity, speed, convenience, and intelligence will converge to deliver highly contextualised and personalised experiences, when and where you want them. This ‘ambient technology’ will be embedded around us, accessible by all (regardless of knowledge or experience), enabling seamless interactions that enrich our daily lives.
This shift brings significant implications around privacy and capture, storage and usage of data. Though today’s cloud service providers will play a central role in engineering the future, it’s possible that cloud infrastructure alone isn’t capable of supporting the complexity of the hyper-connected world we’re moving towards.
Could rebalancing compute from cloud to edge represent a solution?
No longer on cloud 9?
One of the great benefits of cloud platforms is that they enable users to manage and work with more data than they ever could before—from the biggest enterprises right down to the smallest start-ups. The issue is that although the centralised cloud model is sufficient for today’s IoT ecosystems, the infrastructure will quickly become overwhelmed when billions of devices are involved.
The explosion of IoT brings with it a great range of possibilities for businesses in all industries, but it also creates some significant technical issues that will have to be overcome. Though the centralised cloud model is ideal for underpinning IoT ecosystems as they are today, issues will likely arise as we see an exponential increase in the number of connected devices.
We’ll soon be faced with the challenge of managing the control of billions of connected devices, all the while ensuring the data transmitted between them remains secure. The enormity of this issue suggests that a wholly cloud-based model may not scale as our interactions with AI devices become increasingly frequent and complex.
Irrespective of current investment in 5G infrastructure, the sheer volume of data passing through IoT will also increase the risk of the user experience being degraded by network connectivity or latency issues. City-dwellers are the most likely to suffer performance drop, due to the proliferation of smart sensors and connected devices at work within urban environments.
Thankfully, these challenges are not insurmountable. The ability to reduce the amount of data transmitted, to provide higher-level context to the user experience, and to use smart devices on the odd occasion the internet is unavailable, are all on offer with the shift to Edge-AI.
The shift to the edge
So, what exactly is Edge-AI?
Edge-AI refers to the practice of moving data processes closer to the source of the data—in this case the connected device. Decentralising these processes alleviates the burden currently shouldered by centralised cloud systems.
Today, artificial intelligence and machine learning computation is often performed at large scale in datacentres. However, we’re seeing an increasing number of IoT devices equipped with AI/ML capability, which means computation can be performed at the edge of the network.
Edge-AI—intelligence held at the local device level—can respond quickly, without waiting for a response from the cloud. If inference can be performed locally, there’s no need for an expensive data upload, or for costly compute cycles in the cloud.
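A minimal sketch of this edge-first pattern is shown below, assuming a hypothetical on-device model and a hypothetical cloud service: inference runs locally, and the cloud is only consulted when the local result is low-confidence.

```python
# Sketch of the edge-first inference pattern: run the model on the device and
# only fall back to the cloud when the local result is not confident enough.
# `local_model` and `cloud_infer` are hypothetical stand-ins, not real APIs.

CONFIDENCE_THRESHOLD = 0.8

def local_model(sample):
    # placeholder for a tiny on-device model returning (label, confidence)
    return ("wake_word", 0.92)

def cloud_infer(sample):
    # placeholder for an expensive round trip to a cloud inference service
    return ("wake_word", 0.99)

def classify(sample):
    label, confidence = local_model(sample)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label, "edge"    # no upload, no cloud compute cycles, low latency
    label, _ = cloud_infer(sample)
    return label, "cloud"

print(classify(b"audio-frame"))  # -> ('wake_word', 'edge')
```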
Edge-AI poses a viable solution to the data security conundrum by reducing the quantity of data transmitted. With less data bouncing around the IoT ecosystem, there will automatically be fewer opportunities for privacy breaches.
Performing all functions without the need for a connection, Edge-AI also does away with the latency issues associated with cloud-based control. It also enables user experience designers to personalise the interaction with remote AI entities through the fusion of sensor data. All this means that service disruptions will be reduced, and the user experience improved.
Essentially, the kind of intelligent insight previously generated through centralised AI processes will be available in real-time on the device itself. Although AI in the cloud is thought of as a huge collective intelligence, AI at the edge could be compared to a hive mind of many local smaller brains, working together in self-organising and self-sufficient ways.
The future of IoT and the transition to the world of ambient technology will likely hinge on our ability to decentralise networks with the help of Edge-AI. Total cannibalisation of the cloud may not be the ultimate endpoint, but Edge-AI will enable a rebalancing of compute utilisation from core to the edge, where algorithms will learn from their environment to make locally-optimised decisions in real time. This will allow businesses to capitalise on the vast amounts of data collected, rather than becoming overwhelmed by it.
Consumers want retailers to provide them with a transparent, convenient and personalised experience. Innovative retailers that are leading the way are using an artificial intelligence (AI) and data-driven strategy to improve customer experience, all the way from initial acquisition through to delivery.
By Mylo Portas, Retail Customer Success Manager at Peak.
There’s more value locked inside a company’s data than many realise. And, more importantly, as every business (retail or otherwise) is different, that data represents a potential competitive advantage. If a retailer’s data can be unified across the business, it can then be utilised by machine learning (ML) techniques to better understand the preferences of their customers, inform their business strategy, and help them to deliver better bottom-line profitability.
The opportunity is clear to see, as it’s been shown that customers who feel they are receiving a bespoke experience are far more likely to shop with that brand again. Meanwhile, retailers missed out on a potential £2.6 trillion in revenue in 2018 due to poor customer experience.
This represents a massive opportunity for brands that embrace data as part of their business strategy. Consumers today want a unified commerce experience when shopping with a brand; whether it’s online or in-store, and it’s data and AI that can make this customer journey as frictionless as possible.
The role of data and AI in retail
While statements like these may feel abstract and intangible, they’re not. Data about transactions and interactions describes your customers, their preferences and their behaviours. It can be used to predict what products they’ll buy and when, allowing brands to predict demand, optimise fulfilment and personalise the end-to-end customer experience. These processes are vital to company success: even if you’re not using your data, your competitors may well be using theirs and, increasingly – in a global world of product and fulfilment – experience is the competitive advantage.
In fact, Peak’s own research has found that AI-powered businesses are growing faster than the rest, with 50 per cent higher profit margins. The sportswear retailer Footasylum is an excellent example of an organisation in this sector using AI and data to supercharge its operations, using innovative data-driven marketing to engage with customers in the right way, at the right time, with the right offer. This resulted in a return on advertising spend (ROAS) 30 times higher than the industry average and 10 times higher than the company’s previous best. Initiatives such as these demonstrate the value and power of business data when harnessed properly to move ahead of the competition.
Meanwhile, research shows that brand leaders plan to hire 50 percent more data scientists in the next three years, as part of the data-led retail revolution. Digital transformation is a hot topic within the industry, as every retailer strives to find an edge that could put them ahead of the pack. This investment in hiring data scientists shows a recognition for the direction retail is moving in, as strategy becomes increasingly informed by the information brands can garner.
Accurate supply and demand
Data is valuable to retailers in more ways than just marketing and customer acquisition; it can also be used to help manage supply and demand. Using AI, retailers can use data to forecast demand in real time and inform business decisions. For example, this can help keep inventories stocked to ideal levels and ensure fulfilment of customer orders in the most efficient way. Analytics-based demand forecasting is more accurate than other approaches, minimising the money tied up in stock and avoiding shortages during periods of high demand. AI systems can even suggest actions to combat any hidden inefficiencies that are found. This ensures that retailers can efficiently meet the demands of their customers in a cost-effective way.
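As a toy illustration of demand forecasting feeding a stocking decision, the sketch below forecasts next week’s demand from recent sales with a simple moving average and reorders only the shortfall; the figures and method are assumptions for the example, not Peak’s models.

```python
# Toy demand-forecasting sketch: forecast next week's demand from recent sales
# with a simple moving average, then reorder only the shortfall against stock.
# Figures and the forecasting method are assumptions for illustration.

weekly_sales = [120, 135, 150, 160]   # units sold over the last four weeks
stock_on_hand = 90
safety_stock = 30                     # buffer held to avoid shortages

forecast = sum(weekly_sales[-3:]) / 3          # three-week moving average
reorder_qty = max(0, round(forecast + safety_stock - stock_on_hand))

print(f"Forecast demand: {forecast:.0f} units; reorder {reorder_qty} units")
```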
By utilising data and AI across the value chain, a retailer avoids data silos and removes the risk of optimising one area of the business to the detriment of another – for example, aggressively marketing a product that isn’t deeply stocked, only for it to sell out quickly because demand wasn’t planned in alignment with supply, missing sales opportunities. This interconnected AI is the next generation of what retailers are starting to do with data and AI today, and those that get it right will not only future-proof their business but also be market leaders for years to come.
The July issue of Digitalisation World includes a major focus on smart cities. Much talked about, and already being developed in various parts of the world, what are the key ingredients needed to create a smart city? In particular, what role do IT and digital transformation technologies have to play? Through a mixture of articles and comment pieces, industry experts provide plenty of answers.
Part 5.
Instrumentation Technologies and Noesis.Network step up to the challenge and design compelling IoT solutions for London and Glasgow.
Itron, Inc. has revealed that Instrumentation Technologies (I-Tech) and Noesis.Network were selected as the winners of the inaugural Itron Smart City Challenge. The Smart City Challenge invited IoT solution providers to compete for the chance to work with Glasgow and the City of London to deploy breakthrough solutions to address specific challenges defined by the cities.
The Smart City Challenge showcases how Itron is enabling cities and technology innovators to work together to solve problems and improve citizen wellbeing. Using Itron’s standards-based developer tools, the winning companies created integrated IoT solutions that leverage Itron’s IoT networks in the Cities of Glasgow and London. As winners, I-Tech and Noesis will continue to collaborate with the cities to progress their breakthrough solution. Itron will continue to support the winners to help them complete development and bring their solutions to market.
Glasgow Challenge
As one of the world’s top sporting cities and a major destination for conferences and concerts, the City of Glasgow frequently attracts large crowds of visitors. However, this increased population can create significant challenges for the City, and for visitors and residents alike, regarding public transit, traffic, noise and safety. For the challenge, Glasgow asked for solutions to help improve the experience of residents and visitors during highly-populated events, while elevating the City’s profile as a cultural destination for tourists.
The winning solution from Noesis features cost-effective acoustic sensors to address noise pollution and mitigate traffic. Noesis proposes deploying sensors on lampposts in areas with anticipated noise pollution from events and related traffic to identify, localise and quantify noise. The distributed network of noise sensors gathers highly reliable and accurate data, including noise source, location, sound profile and power level. With this data, Glasgow can have unprecedented visibility to acoustic data around event venues to reduce noise pollution. The acoustic sensors can be upgraded over-the-air to support future use cases such as traffic management and public safety.
Finalists for the Glasgow Challenge included:
· Koya Digital: Utilised IoT and AI to monitor, predict and influence the flow of large groups of people travelling through urban environments.
· TerraGo Technologies (Glasgow): On-demand traffic signal control to improve mobility around high-traffic events
London Challenge
Due to the life-threatening nature of incidents involving people entering the River Thames, the City of London sought solutions to improve river safety and address public health priorities. The City wanted solutions to protect citizens by identifying entries into the river, ensuring the availability of safety equipment when it is needed, and accelerating emergency response times. London Challenge winner I-Tech designed a two-step solution to allow London to monitor lifebelts and pinpoint the location of a person in need of rescue support.
The first step of the solution proposes the deployment of small, battery-powered devices that will monitor the lifebelts. To prevent misuse, a device will sound a high-pitched alarm if a lifebelt is removed. If the lifebelt is not placed back in its housing unit within eight seconds, an emergency message will be sent via the Itron IoT network to notify emergency services. The second part of the solution is a jumper-detection system that uses an optical scanner to identify when people fall from the bridge and to track their precise location, assisting first responders in search and rescue. I-Tech carefully designed the solution to operate effectively even in thick fog, and it uses advanced data processing to ensure the lasers detect people rather than birds or other falling objects.
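The timing logic described for the lifebelt monitor can be sketched simply. The example below is a hedged illustration of that eight-second escalation rule; the function names and the polling approach are hypothetical, and the real device reports over the Itron IoT network rather than printing to a console.

```python
# Sketch of the lifebelt-monitoring rule described above: alarm locally as
# soon as the belt leaves its housing, and escalate to emergency services if
# it has not been returned within eight seconds. Hypothetical functions only.

import time

RETURN_WINDOW_SECONDS = 8

def sound_local_alarm():
    print("ALARM: lifebelt removed")

def notify_emergency_services():
    print("ALERT: lifebelt still out, notifying emergency services")

def on_lifebelt_removed(is_back_in_housing):
    sound_local_alarm()
    deadline = time.monotonic() + RETURN_WINDOW_SECONDS
    while time.monotonic() < deadline:
        if is_back_in_housing():
            return "returned"        # likely misuse or a false alarm; no escalation
        time.sleep(0.5)
    notify_emergency_services()
    return "escalated"

# Example: a sensor callback that never sees the belt returned.
print(on_lifebelt_removed(lambda: False))
```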
Finalists for the London Challenge included:
· Cyient: 3D GIS platform with data model geotagged to every device, including hydrophones, HD cameras and drones
· Noesis: Acoustic sensors identify and localize entries
· TerraGo Technologies: On-demand search lighting for authorised emergency personnel
· UniqueID: Connected lifebuoys to detect emergency or misuse, and blinking public lights to help the Coastguard rapidly identify the exact emergency location
“The inaugural Itron Smart City Challenge shows how we are applying technology for a purpose. While these solutions were purpose-built to address specific needs defined by the sponsoring cities, river safety and noise pollution are common concerns for cities worldwide. We invite cities from around the world to collaborate with Itron to launch the next set of open innovation challenges.”
- Itai Dadon, director of smart cities and IoT at Itron
“Since Glasgow is a major destination for conferences and concerts, we want to reduce noise pollution and traffic to ensure an optimal travel experience for visitors. Through our participation in the Itron Smart City Challenge, we found a solution that will easily connect to our existing network and address our concerns.”
- Duncan Booker, Chief Resilience Officer, City of Glasgow
“As a leading IP Core provider excelling in innovative hardware design, Noesis is thrilled to develop a high-quality product for Glasgow that harnesses the power of IoT. For this challenge, we designed a solution that will utilise wireless networks to create real-time, highly granular sound maps with our acoustic sensors. It is an honour to have our solution selected by Glasgow.”
- Kees Den Hollander, Chief Commercial Officer, Noesis Network BV
“The Itron Smart City Challenge gave us an incredible opportunity to seek a creative solution to meet our safety and public health priorities. With I-Tech’s solution, we will be able to shorten response times in dangerous situations and improve safety for our citizens. We look forward to implementing this solution, which could be replicated in cities around the world.”
- Giles Radford, Highways Manager, Department of the Built Environment at the City of London Corporation
“This challenge was a great opportunity for us. We demonstrated our capability to study the problem and then deliver a customised solution that enhances Itron’s IoT networks with our innovative, custom-built sensing electronics. Utilising Itron’s developer tools, we developed a solution that will enhance safety for London’s citizens. We are pleased that London selected our solution, which we believe represents the future of emergency response services.”
- Uros Dragonja, Solutions Architect at Instrumentation Technologies
The network is the backbone of almost every organisation today. When it is not available productivity falls, the business loses money and its reputation suffers. Typically, the network and its efficient operation is fundamental to the organisation’s success. And yet trends like remote working and virtualisation, while they help drive business flexibility and productivity, may also make the network more vulnerable.
By Alan Stewart-Brown, VP of EMEA Opengear.
As the IT industry has become more virtualised, with the ongoing migration to the cloud, the emergence of the Industrial Internet of Things and the rise of connectivity, the network has become more complex and difficult to manage. As more people work from home or connect remotely while off-site, it has also become more dispersed. Taken together, these developments make it more important that the network is kept up and running, but also more likely that there will be outages.
Businesses are adding layers of complexity to networks and that can bring vulnerabilities. Today, we are seeing a raft of factors that can cause network or system outages – from ISP carrier issues to fibre cuts to simple human error. Added to this, network devices are becoming ever-more complex. As software stacks require more frequent updates, they become more vulnerable to bugs, exploits and cyber-attacks and all that in itself leads to more outages.
For all these reasons, we are seeing a growing focus on the concept of network resilience – but what exactly do we mean by this, why does it matter and how can it best be achieved? Network resilience is the ability to withstand and recover from a disruption of service [1]. One way of measuring it is how quickly the business can get back up and running at normal capacity following an outage.
Network resilience is unfortunately often confused with redundancy. Organisations sometimes think that if they put two boxes in the core or the edge rather than one, they have solved their problem. Really though, they are just moving it somewhere else. A redundant system duplicates some network elements so that if one path fails another can be used. It removes a single point of failure but resilience considers the full ecosystem from core to edge.
Yet, despite this, many organisations still neglect to consider resilience when designing and building their networks. Unless they have just experienced an outage, they may not appreciate the importance of resilience or assign sufficient resources to it. Moreover, few businesses have the necessary in-house expertise to design a resilient network from the outset.
Something like Out of Band (OOB) management, for example, is likely to always be a small part – in the network design phase at least – of a much larger project. There is a process of education to take place here, of course: organisations that include resilience in their network from the outset save time and money by having that capability there from the start, rather than having to implement it reactively after the event.
The fact is that many organisations today face issues in being able to quickly identify and remediate reliability or resilience issues. Take a large organisation with a Network Operations Centre (NOC). There are lots of branches and offices, often on different continents around the world, with the attendant time-zone issues that this typically brings. Often, they are trying to do more with less, so they may have fewer technical staff based at these remote sites. As a result, they may struggle to get visibility that an outage has even occurred, because they are not proactively notified if something goes offline. Even when they are aware, it may be difficult to understand which piece of equipment at which specific location has a problem if nobody is on site to physically look.
True network resilience is not just about providing resilience for a single piece of equipment, whether that be a router or a core switch. In a global economy, it is important that any such solution can plug into all of the equipment at a data centre or edge site, map it, and establish what is online and offline at any given time – wherever in the world it is located.
That enables a system reboot to be quickly carried out remotely. If that does not work, it might well be that an issue with a software update is the root of the problem. With the latest smart out-of-band devices this can be readily addressed, because an image of the core equipment and its configuration, whether it be a switch or a router for example, can be retained, and the device can be quickly rebuilt remotely without the need for sending somebody on site. In the event of an outage, it is therefore possible to deliver network resilience via failover to cellular, while the original fault is being remotely addressed, enabling the business to keep up and running even while the primary network is down.
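A much-simplified sketch of the failover decision such an out-of-band appliance makes is shown below: probe the primary path, and switch management traffic to cellular when it stops responding. The probe target and path-selection functions are hypothetical placeholders, not Opengear’s implementation.

```python
# Simplified sketch of out-of-band failover: if the primary WAN path stops
# responding, use the cellular path so the site stays reachable while the
# original fault is remediated. Probe host and logic are placeholders only.

import socket

def primary_link_up(probe_host="192.0.2.1", port=443, timeout=2.0):
    # 192.0.2.1 is a documentation (TEST-NET-1) address used as a placeholder.
    try:
        with socket.create_connection((probe_host, port), timeout=timeout):
            return True
    except OSError:
        return False

def choose_path():
    return "primary" if primary_link_up() else "cellular"

print(f"Active management path: {choose_path()}")
```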
Building in resiliency through the OOB approach does cost money, of course, but it also pays for itself over the long term. You might only use it a couple of times a year – but when you need it, you really need it. Anyone that has just suffered a network outage will understand the benefits of OOB as a way of keeping their business running in what is effectively an emergency, but as referenced above, it is far better to plan for resilience from the word go. After all, networks are the fundamental ‘backbone’ of almost every organisation today, and many businesses will benefit from bringing network resilience into the heart of their approach right from the very outset.
[1] Ray A. Rothrock, Digital Resilience (AMACOM, 2018)
The July issue of Digitalisation World includes a major focus on smart cities. Much talked about, and already being developed in various parts of the world, what are the key ingredients needed to create a smart city? In particular, what role do IT and digital transformation technologies have to play? Through a mixture of articles and comment pieces, industry experts provide plenty of answers.
Part 6.
The definition of the ‘smart city’ continues to elude us. In fact, I would argue the term smart city is by definition a contradiction, says Ben Storey, Director of Marketing at architecture platform, Archiboo.
Cities are, by their nature, organic, a collection of hyperlocal centres. From the outside, we may see them as one entity, but once explored you see just how disjointed a city truly is. In Amsterdam, a city desperate to claim the smart city title (and with its own definition of what that means), we have countless districts - each with their own distinct personality - whether the scenic Jordaan, the fast-developing Noord or the melting pot of De Pijp. These are all areas with their unique appeal, which have organically evolved over time. In fact, for many city-dwellers like myself - this diversity is what we love, and it’s certainly what helps give it its status as a centre of culture.
Yet ‘smart cities’ are too often discussed as the antithesis of this, in a manner that implies amalgamation and standardization. Perhaps too many conversations focus on efficiency and simplicity. This narrative is mostly driven by tech companies who are used to pushing one dream: We will make your life easier. But we don’t live in cities just for ease - we reside here for the energy, the character, the vibrancy, and for the community.
This tech-focused approach is why we’ve ended up with cities such as Songdo in South Korea. Despite boasting many technological efficiencies it’s been described as having “a Chernobyl-like emptiness” and “characterless”.
Yet the term smart cities continues to persist. How? The genius of the term is that it taps into our desire to always have the best phone, a nicer car, a larger home. But as architect Rem Koolhaas points out: “This transfer of authority has been achieved in a clever way by calling their city smart—and by calling it smart, our city is condemned to being stupid.” So, despite this contradiction, we fight on - looking to disrupt our idea of the city overnight - as we did with the smartphone or the shopping experience.
At the risk of generalising, technology firms are far too used to becoming the sole authority. They’re not very good at sharing the platform.
This is where architects come in. Cities thrive when they have creativity at their heart, when their formation feels natural - and this is what architects can strive for. Ultimately architects have understood that the human experience requires intuitiveness and ease, coupled with self expression and identity.
Architects are often the gatekeepers as well. Digital transformation is essential on a government and infrastructure level to ensure that the groundwork is laid, but it can only go so far without proper integration with the architectural world. Architects build the homes, the work spaces, the shopping centres - and it is these buildings that truly have the capability to implement innovative and complete IoT solutions.
The problem is, coming from Archiboo, a platform which focuses on bringing together architecture and innovation, that architects aren’t very good at sharing the platform either! Architects can be cynical about the smart city because they fear a future of copy-and-paste buildings and sterile urban environments.
We’ve seen businesses like Arup, HOK, and UNStudio leading the charge when it comes to discussing technology. And many boutique practices have also tried to own this subject matter - moving at breakneck speeds. Bridges are being built from both sides. However, if you look at most smart city conferences you’ll struggle to find many architects in attendance - and if you visit many architecture events you’ll rarely hear the term smart city mentioned. There is still a chasm to cross.
Ultimately the smart city can only exist once architects and smart city innovators can effectively collaborate and agree on how to balance the human need for identity and the city-level need for data and efficiency. Without architects smart cities will be destined to be soulless, but without technology they will be destitute.
“Despite widespread global investment in the smart city concept – worldwide spend on smart city initiatives is anticipated to total more than £76 billion in 2019 – smart cities have yet to realise their vast potential in the UK. But why is this?
“Digital transformation naturally has a major role to play in transforming our current urban infrastructure, and the solutions to make the smart city a reality are arguably already out there – and improving all the time. However, to create a truly smart city we need to look at how we can make the best use of smart solutions to provide sceptical stakeholders with demonstrable proof of concept.
“The current lack of uptake can be attributed at least partially to low consumer awareness. Indeed, more than two thirds (68%) of UK residents don’t even know what a smart city is as it stands. What’s more, this lack of awareness is actually damaging consumer perceptions of the concept – over a quarter (26%) of people admitted that they find smart cities “worrying” due to a lack of available information on the topic.
“The key to changing public perceptions of the smart city for the better lies in more than raising awareness; we must also do more to prove the ways in which smart cities could provide clear, tangible improvements to our quality of life. Particularly crucial is solving the issues that have unfortunately become an accepted part of living in a city, such as traffic congestion, security and safety (or lack thereof).
“Smarter traffic control measures – an extremely popular concept among those we surveyed – are a particular area which, if implemented properly, could provide proof of concept to sceptical consumers. Programmable smart barriers that control the flow of traffic, and smart traffic lights that respond to the volume of traffic on the road in real time, are both examples of smart city concepts that could be used to drive efficiency and drastically cut commuter times across the board.
“Interestingly, we also found that almost a quarter (24%) of consumers would be willing to fund smart solutions through their tax contributions. Given that funding is another recurrent barrier to smart city adoption, this further underlines the importance of providing substantial proof of concept.
“Local authority bodies, which have a tremendous amount of influence over which smart city solutions are adopted within their respective cities, also need to be convinced. Implementing the digital transformation initiatives necessary to create smart cities is expensive – project costs can often run into the billions – meaning it’s crucial to convince those responsible for allocating public funding of exactly what the smart city can offer.
“Simply enough, while digital transformation naturally has an integral role to play in making the smart city possible, what will really make it a reality is ensuring that the clear benefits smart cities offer can be articulated to key stakeholders in a clear, measurable way.”
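As a hedged illustration of the adaptive traffic-signal idea mentioned above, the sketch below simply scales a junction’s green phase with the number of vehicles a roadside sensor counts; the timing limits and counts are invented for the example.

```python
# Toy sketch of an adaptive traffic signal: scale the green phase with the
# measured queue length, within fixed bounds. All figures are invented.

MIN_GREEN, MAX_GREEN = 10, 60   # seconds
SECONDS_PER_VEHICLE = 2

def green_time(vehicles_waiting):
    return max(MIN_GREEN, min(MAX_GREEN, vehicles_waiting * SECONDS_PER_VEHICLE))

for queue in (3, 14, 40):
    print(f"{queue} vehicles queued -> {green_time(queue)} s of green")
```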
The promise of future cities and buildings built around a smart vision to reduce waste, drive efficiencies and optimise resources is a prodigious one, according to Steven Kenny, Industry Liaison, Architecture and Engineering at Axis Communications.
With a growing demand for businesses and governments around the world to deliver significant improvements in the way our cities and the buildings within them are managed, there are many challenges to be considered, and not least, security.
At the core, smart technology enables the collection and analysis of data to create actionable and automated events that will streamline operations. For this to be delivered at a far greater scale, we must move away from integration and into device interoperability. However, with this comes a host of challenges that must be considered for smart cities and buildings to deliver the desired outcomes whilst addressing the growing concerns and challenges around privacy and cybersecurity.
In order for different technologies to interact, there must be a means for them to communicate without compromising security. The proliferation of IoT devices has been accompanied by an exponential increase in the number of threat exposures and attack vectors that put in jeopardy the systems our smart cities and buildings will rely on.
With an ever-increasing number of cyber breaches and a common acknowledgment that you are only as strong as your weakest link, it is important that cybersecurity is considered and evaluated throughout the whole supply chain to protect data, maintain privacy and keep risk associated with cyber threats to a minimum. This process should always start by looking at device security and the vendors’ cyber maturity.
The associated disruption as a result of a cybersecurity breach of a smart system could be catastrophic. At a minimum, it would cause system downtime and impact its ability to operate. The loss of personal data or IP may also damage the brand, impact a company’s share price or even cause actual physical harm. Ensuring that converged security becomes a vital component of this rapidly changing paradigm is of critical importance; safety and security must be at the heart of our shared ambitions for a smarter environment.
By Jukka Virkkunen, Co-Founding Partner, Digital Workforce.
Robotic Process Automation (RPA) is moving into the mainstream. At the end of 2018, Gartner estimated that global RPA expenditure would reach $680 million by the end of the year, marking a year-on-year increase of 57%. What’s more, the analyst house also predicted that by the end of 2022, RPA expenditure would hit $2.4 billion - and the picture in Europe is no different. One study conducted by Information Services Group (ISG) looking at the state of RPA adoption across Europe estimated that by 2020, 92% of European businesses will have adopted RPA to some extent.
But amongst all the hype, automation technologies, including both RPA and artificial intelligence, have also been met with some caution, with many European organisations still uncertain on how to overcome some of the barriers to RPA implementation and struggling to get their RPA initiatives up and running. These barriers include:
● Getting staff on board with automation initiatives
● Lack of budget to begin RPA implementation
● Choosing the right processes to start automation
This article will look at some of the ways in which European businesses have overcome these challenges and have been able to unlock the true value of RPA in doing so.
Realising the value of automation to the workforce
It is commonly misconstrued that automation technologies, such as RPA and artificial intelligence (AI), have been designed to replace humans and eliminate the inefficiencies they are known to produce. In its study of European RPA adoption, ISG found that one of the most pertinent barriers to the growth of RPA on the continent was ‘organisational resistance to change’: 33% of the European business leaders surveyed cited this as the main obstacle to expanding RPA use in their organisation.
In reality, the opposite is true. The best business results are achieved when people, AI and robots work in conjunction with one another, complementing each other’s capabilities. Automation is about enabling humans to do purposeful work: work that is creative, innovative and strategic. AI and robots, on the other hand, are best put to use on mundane, data-intensive and repetitive tasks.
A great use case of how this enablement relationship works in practice can be seen by looking at the example of the Finnish healthcare system. A recent study of nine healthcare districts in Finland found that half of the work time of the healthcare professionals surveyed was spent on computer-based knowledge tasks, and away from patient care. However, the study found that automating certain data-intense processes had the potential to save nurses across the nine districts an average of 31% of their on-shift time, while doctors could save an even more impressive 34%.
In this instance, implementing RPA enabled purposeful work by giving the doctors and nurses more time to fulfill their specialist tasks. This not only increased each individual district’s overall productivity, but also enabled each district to deliver better patient care. Furthermore, the study also found that by giving the doctors and nurses more time to focus on their specialist tasks, implementing RPA actually led to an average increase in their job satisfaction.
Working with a limited budget
Another of the main obstacles to European RPA growth identified in the ISG report was limited budget for RPA projects. One third of all the business leaders surveyed identified a restricted budget as the main reason their organisation would not be able to expand its use of RPA. This is where implementing a robot-as-a-service model could make a massive difference. The robot-as-a-service model is the latest iteration of the software-as-a-service model, where businesses only pay for what they need, meaning they can scale up and down quickly and cost-effectively. The model provides organisations with all the tools they need to begin implementing digital workers, while also being the best way to scale RPA projects to meet budgetary requirements.
Amongst the various sectors looking to implement RPA across Europe, manufacturing companies have been quick to adopt the robot-as-a-service model as it allows them to begin implementing automation with small-scale projects and then scale up through their operation. Norsk Stål, Norway’s leading steel and metal provider, is one such example. The company began by automating its Sales Draft process, which included evaluating the amount of material needed to fulfill a customer order. The service was managed on a robot-as-a-service model and has led to the company scaling up by automating a planning production request.
Choosing the right processes for automation
However, in order for the robot-as-a-service model to work as effectively as possible, it is essential that businesses choose the right processes to automate. Implementing RPA must be aligned with the company’s overall vision. Born-digital companies operate this way automatically, but for many older businesses the regeneration process is too slow. Research has indicated that one of the leading causes of failed automation initiatives is a poor choice of pilot processes: one report from The Shared Services & Outsourcing Network found that in 38% of automation initiatives that did not live up to client expectations, the wrong processes had been chosen for automation. A specific process should only be automated to gain a strategic business advantage, and the decision should come with defined business objectives and metrics for success.
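By way of illustration – the criteria, weights and candidate processes below are invented, not a published methodology – a simple scoring model can make that prioritisation explicit, favouring high-volume, rule-based work with measurable time at stake:

```python
# Hypothetical scoring sketch for ranking candidate processes for RPA.
# The criteria below are illustrative assumptions: high-volume, rule-based
# work with a measurable amount of handling time tends to make a better
# pilot than rare, judgement-heavy tasks.

from dataclasses import dataclass

@dataclass
class CandidateProcess:
    name: str
    monthly_volume: int        # transactions per month
    rule_based_share: float    # 0..1, portion needing no human judgement
    error_rate: float          # 0..1, current manual error rate
    minutes_per_item: float    # average handling time today

def automation_score(p: CandidateProcess) -> float:
    """Crude priority score: time at stake, weighted by how automatable it is."""
    minutes_at_stake = p.monthly_volume * p.minutes_per_item
    return minutes_at_stake * p.rule_based_share * (1 + p.error_rate)

candidates = [
    CandidateProcess("Invoice matching", 12_000, 0.9, 0.04, 6),
    CandidateProcess("Contract review", 300, 0.2, 0.02, 45),
]

for p in sorted(candidates, key=automation_score, reverse=True):
    print(f"{p.name}: score {automation_score(p):,.0f}")
```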
Many businesses have experience of one-off, single and ad-hoc automation projects without a clear connection to the company’s vision and strategy, and these projects can end up being very expensive. Automation initiatives must be managed and developed holistically, not individually, to get the best possible business results, and they must be aligned to the strategy in order to unlock the potential business value of RPA.
Nordea, the Nordics’ largest financial services group, overcame this challenge by giving oversight of all its automation projects to its Centre of Excellence, a group of expert teams that together cover all the bases needed for successful automation implementation. It developed a clear automation pipeline strategy to ensure that all of the company’s projects meet the overall organisational objectives and are run as efficiently as possible.
RPA has the potential to drive business productivity and increase organisational efficiency while also reducing cost and improving the nature of human jobs - but it must be implemented correctly to do so. This means that businesses must bring their employees on board so that they understand the true value of the automation project, choose the right processes to automate based on their business objectives, and ensure that they can scale their automation initiatives to meet every requirement. Reaching industrial-scale benefit is a very different challenge from realising the benefits of individual pilots. For this reason, it is crucial that automation initiatives are supported by management.
The July issue of Digitalisation World includes a major focus on smart cities. Much talked about, and already being developed in various parts of the world, what are the key ingredients needed to create a smart city? In particular, what role do IT and digital transformation technologies have to play? Through a mixture of articles and comment pieces, industry experts provide plenty of answers.
Part 7.
Asks Dan Bladen, CEO and co-founder of Chargifi.
The true measure of a smart city will be in its ability to collaborate, particularly to open a lifeline to start-ups who are often those with the vision to take city services to the next level.
A true smart city relies on its developers being citizen-led, not technology-led. Shifting the focus to providing a benefit to citizens – whether they are tourists or residents, old or young – will drive the delivery of intuitive, seamless solutions. With that, the arrival of the 5G network will give a critical boost to the Internet of Things (IoT), which brings together billions of connected smart objects and will make the flow of data smoother. This will not only allow private and public sector organisations to harness the power of real-time analytics; datasets focusing on transport, energy, health, safety and public spaces will also deliver an integrated view of a city’s infrastructure and enable important integrated services to be developed for citizens.
Only when systems work together to form a ‘shared intuition’ will smart cities be definitively ‘smart’. Smart buildings, for example, will only ever be as smart as the interconnectivity between the different types of intelligent services and features. If the ultimate goal of a smart city is to improve quality of life, then IoT applications and new technologies must be flawlessly woven into the fabric of our everyday lives.
Fundamentally, businesses are looking to improve operational efficiencies and are making investments to ensure there is seamless communication within the ecosystem of business process and consumer interaction.
Individuals, on the other hand, are going to want to move from ‘on-demand’ to ‘orchestration’. Today a user has to request something to come to them ‘on demand’; in a smart city this should be orchestrated for the user. The smart city, and the individual’s data contributing to that city, should provide the ‘input’ to the system, rather than the user having to press ‘deliver’.
New, data-driven innovations like smart wireless charging, artificial intelligence and geospatial technology will play a key role in delivering the seamless experience consumers have come to expect in everyday life. From phones to drones and electric vehicles, wireless connectivity will be essential for the transformation of global mobility in future smart cities. Connected autonomous vehicles (CAVs) and delivery drones, for example, will rely on conveniently located, fast wireless charging points to stay powered, while a strong and reliable smart infrastructure will be crucial for local authorities and transport operators, for whom the sharing of data is fundamental to understanding passengers’ needs and requirements.
Says Andrew Palmer, Consulting Director, Telecoms at CGI UK.
Key infrastructure, including a network of sensors, cameras, wireless devices and data centres, is required to support the provision of essential services in smart cities. Successfully implemented, it can increase service deployments and efficiencies, and ultimately facilitate the optimal use of resources. In particular, a network of sensors can detect, control and manage superfluous energy usage and make any necessary adjustments depending on needs. Citizens subsequently benefit from being informed about when and where to make savings in energy, time and cost. Indeed, by adding Intelligent Automation, Advanced Analytics and Machine Learning to interpret and act upon the data gathered from these sensors, the services supplied and resources consumed could benefit from predictive and then cognitive maintenance models, rather than purely reactive ones. This would evolve to marry together the concepts of smart grids, demand-side management and energy flexibility to provide a fully programmable or self-organising energy ecosystem that is self-healing and able to cope with any demand pattern.
5G networks are critical to delivering this digital transformation. Offering faster speeds and more reliable connections on smartphones and devices, 5G networks are set to power the huge rise in IoT connectivity. In turn, this will provide the infrastructure required to carry huge amounts of data and bring about an increasingly connected, smarter world. An example of this is the introduction of 5G roads. Capturing and transmitting real-time data, 5G roads will form part of a more integrated transport system with huge benefits for people. Real-time alerts on road congestion, traffic accidents and adverse weather conditions could, for example, enable significant improvements in transit planning and road safety. When married to connected cars, the “Vehicle to Everything” (V2X) world means that cars would interact with each other and with the smart cities in which they are being driven. Optimal routes could then be given to in-car navigation systems based on traffic conditions or air quality – cars would be directed to ease the flow and reduce jams, which also reduces pollution from idling engines. The smart city would be able to regulate vehicles in accordance with a pre-determined set of rules, using Intelligent Automation and Machine Learning to ensure key objectives and outcomes are maintained.
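By way of illustration only – the route data, metrics and weights below are invented – a minimal sketch of the kind of routing rule described above, in which a navigation system trades off travel time, congestion and air quality:

```python
# Illustrative sketch of the routing decision described above: choose the
# route whose weighted cost over travel time, congestion and air quality is
# lowest. Route data and weights are invented for the example.

routes = {
    "A12 corridor": {"minutes": 22, "congestion": 0.8, "air_quality_index": 60},
    "Ring road":    {"minutes": 28, "congestion": 0.3, "air_quality_index": 35},
    "City centre":  {"minutes": 18, "congestion": 0.9, "air_quality_index": 80},
}

WEIGHTS = {"minutes": 1.0, "congestion": 10.0, "air_quality_index": 0.2}

def route_cost(metrics: dict) -> float:
    return sum(WEIGHTS[k] * v for k, v in metrics.items())

best = min(routes, key=lambda name: route_cost(routes[name]))
print("Suggested route:", best)
```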
Smart Cities are already here, in terms of the amount of data that can already be gathered and processed. The next step is to develop the “City as a Platform” or “CityOS” that will allow the city to interact with internal and external end users, systems and ecosystems in a more integrated and cohesive way. The major barriers, such as existing vertical IT/system silos, have to be removed or mitigated by abstraction layers and APIs, to allow diverse and complex combinations of data to be processed to drive greater insight or automation. Some cities are not going to wait for 5G to be readily-available – they are using their existing infrastructures and IT systems to develop use cases that may not need 5G or conversely, will operate that much better with 5G.
Dave Shuman, Managing Director, Connected Industries and Smart Cities, Cloudera, suggests that:
Perhaps controversially, smart cities are not about the technology itself, but how technology is implemented to solve problems that a city must tackle such as traffic congestion, health and safety, escalating climate change and, overall, the need to improve delivery of citizen services. Rather than simply connecting IoT devices, a truly smart city functions on the basis of a shared data infrastructure that allows an orchestrated delivery of services. The types of applications to fuel these use cases will vary, however, data remains the key underlying asset needed to both improve the way citizens access certain services and public entities deliver value to the public.
Since data fuels the growth of smart cities, it is crucial for governments to invest in data management and data security platforms, advanced analytics, and machine learning in order to address a variety of potential scenarios, ranging from facial recognition to sideroad parking and city planning. In this context, the term “big data” is not about size, but rather finding new life-changing and transformational opportunities to cost-effectively connect various systems and drive real-time data processing to ultimately create safer and more fulfilling environments for citizens.
A major barrier to making a city or public entity smart is that, traditionally, a lot of the data could not be collected or stored or, if collected, could not be integrated across systems to drive intelligence. Data integration is the most fundamental ingredient in ensuring that a city’s attempt to become an intelligent system of systems does not result in a system of silos – real-time traffic data, pollution levels, city parking, building systems and security cameras each held apart.
For a city or public entity to be truly smart, all of these items have to operate cohesively. A single view of the data requires the capability to integrate a multitude of datasets in a platform that spans on-premise and cloud deployments, guaranteeing the highest levels of governance, compliance and security. The key to managing such a range of data is a capability that allows for both scaling analytic workloads and the preservation of detailed data with unexplored value, as both are vital to future growth potential.
From a practical point of view, when initiating a smart programme, cities should start small, with a strategy that is specific in the short term and flexible in the long term, knowing that these projects are not independent of one another and should not function in silos. With each completed project, whether successful or not, cities lay the foundation to build later projects more easily and at lower cost. With the average city or public entity having to deal with an increasing amount of data - both structured and unstructured - produced by sensors and cameras already in place, they will require the ability to collect data at scale, in real time, and analyse it to derive operational insights, with the aim of enabling machine learning to proactively detect anomalies and predict outcomes.
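As a minimal sketch of that final point – the window size, threshold and sample readings are illustrative assumptions, not a production design – a real-time sensor feed can be screened by flagging readings that drift well outside their recent history:

```python
# Minimal anomaly-detection sketch: flag a sensor reading when it sits
# several standard deviations away from the recent rolling window.

from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=20, threshold=3.0):
    """Yield (index, value) for readings far outside the recent window."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield i, value
        recent.append(value)

# Example: simulated traffic-flow counts with one sudden spike.
counts = [40, 42, 39, 41, 43, 38, 40] * 5 + [120] + [41, 40]
print(list(detect_anomalies(counts)))
```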
Andy Taylor, Director of Strategy at Cubic Transportation Systems, the company that provides the technology behind London’s Oyster card, explains:
“A smart city is one which combines technology and infrastructure, and is informed by the people who commute, work, shelter and socialise within them. Thanks to these integrated systems, service providers will soon have the opportunity to tap into real-time data, giving them the means to react on-the-fly to the needs of the millions who use public transport.
Historically, technology in transport has been fairly limited. With so many different transit operators using their own technologies and holding their own data monopolies, a more interconnected and smarter way of travelling has often been held back. This poses a big issue, especially for large cities that experience high numbers of commuters at any given time. We need our infrastructure to draw wisdom from the environment in situ to tackle these growing demands.
IoT and big data will have the biggest impact on smart cities. Helping drive this movement is the fact that more and more organisations are readily adopting cloud-based technologies, making it easier to harness IoT and big data solutions. Transit operators now have cloud back-end systems, which provide the affordable flexibility and scalability needed to adapt to changing needs and manage peak traffic flows or special situations when computing resources are in greatest demand.
Cloud also enables transit operators to capitalise on AI technologies, which have the power to scan masses of data to understand trends and make predictions. As cities become connected and smart, systems will be able to recognise data trends via sensors within public services in real time, and even respond autonomously. By bringing together multiple systems and data types, from internal and remote sensors and even real-time social media sentiment feeds, operational awareness can be raised to new levels.
People will drive the vision of smart cities through an integrated transportation system. While transitioning into a fully fledged smart city may take some time, we are seeing early steps towards this progression through the concept of Mobility as a Service (MaaS) and its data sharing models.
MaaS, defined as the integration of various forms of transport services into a single mobility service accessible on demand, offers various routes of transport through one application and one payment system. MaaS will offer a stepping stone before a truly smart city is realised, and ensures the network will be primed to take advantage of AI and IoT technologies when they enter the fray.”
Matthieu Francoz, Business Developer Worldwide, Dassault Systèmes, comments:
“Today’s cities are going through a period of profound change. Increasing populations, demand for housing and other amenities, and tightening budgets together present an opportunity to review long-term plans for addressing the social, economic and environmental challenges to the sustainability and resilience of cities.
Smart city initiatives offer the potential to anticipate and plan for more robust urban areas. Providing cities with the technologies to shift to a smart city enables digital collaboration on stimulating city growth, as well as the curation of virtual universes in which to imagine and create sustainable innovations. Faced with the complexity of today’s urban challenges, traditional methods and techniques of urban planning and design are outdated. Implementing digital technologies will assist the transformation of urban planning for more liveable cities.
Virtual Singapore is a great example of how the Singapore Government has used virtualisation to help analyse and understand how the city works today to make better predictions for the growth of the city in the future. In the real world, the limits are tested with this sort of analysis, but in the virtual world, the possibilities are endless. Creating a digital twin of the city means that scenarios can be played out, solutions can be evaluated, and the government can understand how certain strategies could be executed and improved in the real world.
Smart city strategies also engage internal and external stakeholders, encouraging systematic and collaborative thinking to manage the city’s potential growth and bring innovation to the forefront of its initiatives.
Technology will continue to play an instrumental role in city transformation. As the landscape continues to change, smart cities will become vital in helping to overcome the complexity of city ecosystems. This approach will lead to a reimagining of the entire discipline of architecture. At the heart of it, we must also consider how this will benefit the inhabitants of the city. Technologies must not only drive the end result but also support the development of a city that continually engages with and responds to its citizens’ needs. We no longer have to imagine what the future city may look like; instead, we can drive what it should start to look like.”
“Everyone is entitled to his own opinion, but not his own facts.”
This bold decree from former United States senator Daniel Patrick Moynihan may have been stated decades ago in a political context, but it remains surprisingly relevant and applicable in today’s digital age in relation to many organisations’ data environments.
By Gary Chitan, Head of Enterprise Data Intelligence Software Sales, UK and Ireland, ASG Technologies.
That is, because many large organisations still try to discover data manually, they may have an opinion about what their data landscape looks like, but they are not dealing with complete facts. They lack the full metadata picture.
This gap exists because they are using solutions that are either completely manual, or hybrids of manual and partially-automated tools. With data growth exponential and regulatory pressures ramping up, this approach is no longer sustainable.
Why Data Discovery Matters More Than Ever
Data discovery—which involves the collection and analysis of data across the organisation to enable firms to understand their data, gain insights from it and then act on those insights—is a growing corporate priority. It is a trend driven, in large part, by just how much volume today’s enterprises amass.
Yet comprehensive data discovery is extremely challenging to achieve. Where information has been created and managed is hard to keep track of amid acquisitions, myriad legacy technologies and a business environment in which employees rarely stay in one position or company for several years.
Taken all together, this means that many organisations do not have a handle on what data they have at their disposal – and what they do not. They may think they do, but they do not know for a fact, which creates a serious issue, starting at the very top of the organisation. Most senior decision-makers today are focused on transforming their businesses and adapting to changing market environments. Moreover, they are also often asked to produce certain information in response to regulatory need or shareholder request. Regardless of the source of the need, these leaders must ensure that they have accurate data at hand.
The Risks of Manual Data Discovery
The reality is that most organisations today are still using manual methods of data discovery. While they might have the capability to generate an automated report from the CRM system, for example, automation remains largely in pockets. Few businesses have automated the entire data discovery process end-to-end.
As a result, they typically do not have an end-to-end view of how their data is moving across the enterprise, nor if and how it is transforming along the way. Moreover, most manual data discovery processes are cumbersome, requiring a great deal of man-hours to pull relevant information, which often results in it being out-of-date as soon as it is created. Businesses may put a lot of work into data discovery, but when done manually, its value starts to diminish as soon as it is finished. Another risk of a manual approach is that businesses can never be certain the data they are reporting to regulators is 100 percent accurate due to human error.
Beyond core compliance risks, businesses must also consider the risks that incomplete data poses to their agility. Without complete information, businesses may be unable to react quickly enough to opportunities. For example, if a special event cropped up quickly and the business wanted to re-price some merchandise, it would have to manually search for relevant data and trace it through all its systems to change all the pricing and coding. This is likely to be time-consuming and expensive, and by the time the manual process is complete, the opportunity may have passed.
The understanding of these risk factors is, in itself, a powerful driver for organisations to migrate from a manual to an automated approach.
Making the Move to Automated Data Discovery
To kick-start the move, proponents of automated data discovery need first to prove the case for it internally. The move to automation will obviously entail investment, so organisations need to start thinking about the business case they can make for it. With digital transformation at the forefront of many organisations’ priorities, this case cannot centre solely on the avoidance of regulatory fines and the reduction of risk. It must focus on how the investment will be leveraged to help the business evolve. That is, how can automated data discovery help the business become more profitable and achieve its end goals?
In line with that, it is key to determine what data the business needs to understand in order to achieve those goals. What systems hold that data, and which areas of it are likely to bring the most value if automated? As more projects and initiatives are automated, businesses can start filling in the gaps by identifying where the information is and what systems and technology it is connected to, in order to determine which efforts should be prioritised next.
Technology plays a critical role in this transition. Organisations need to capture metadata across every technology it sits in, to see what transformations are taking place, even down to the code level. They then need to bring that back into a metadata hub where they can visually display the end-to-end flow of physical data through the underlying applications, services and data stores. That flow of data then informs the application layer, and it is at this stage that technology vendors can start to get the business more involved in the discussion about change management, transformation and compliance. It is here that business leaders’ insights are key, as they understand their systems and applications far better than they understand the deep technical data flows involved.
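A minimal sketch of that idea, with hypothetical system names, is to treat the metadata hub as a directed graph of data flows, so that end-to-end lineage and the impact of a change can be queried directly (a real catalogue would also record the transformations, owners and code-level detail):

```python
# Lineage-as-a-graph sketch: nodes are datasets, edges are data flows.
# System names are invented for illustration.

import networkx as nx

lineage = nx.DiGraph()
flows = [
    ("CRM.customers", "Staging.customers_raw"),
    ("Staging.customers_raw", "Warehouse.dim_customer"),
    ("Warehouse.dim_customer", "Reporting.churn_dashboard"),
    ("Billing.invoices", "Warehouse.fact_invoice"),
    ("Warehouse.fact_invoice", "Reporting.churn_dashboard"),
]
lineage.add_edges_from(flows)

# Impact analysis: everything downstream of the CRM source.
print(sorted(nx.descendants(lineage, "CRM.customers")))

# Provenance: everything upstream of the dashboard.
print(sorted(nx.ancestors(lineage, "Reporting.churn_dashboard")))
```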
With that foundation in place, the business can then overlay a data governance approach to better manage change in the evolving data environment. That involves identifying data stewards and data practitioners, while allowing data consumers to access the information that is relevant to them. Being able to dynamically map all the data flows and dependencies is the essential first step to getting stakeholders together to discuss the likely impact of any changes to the data environment and make informed decisions about the best route forward.
The Benefits of Automated Data Discovery
With execution underway, the business benefits begin to come to fruition.
One of the most immediate benefits organisations see is the reduction in man-days of effort spent capturing information. Once the system is automated, it runs continuously. It also gives the business the ability to look at and analyse the past – particularly what changes have been made to the environment and what their effects have been. Another benefit is that automated data discovery enables organisations to improve the way data is visualised. Putting the end-to-end lineage on display allows stakeholders to clearly see how data is moving and transforming. Taken together, these benefits allow the organisation to see not only whether changes have, in fact, improved the environment or made it worse, but also to make more informed decisions about future improvements.
Organisations are also able to more clearly understand if they are possibly missing out on key market opportunities. An automated approach allows them to do this much more efficiently, giving them the understanding of and insight into data they can use to create competitive advantage.
Only when organisations implement new automated systems and consign manual data discovery to the past can they have a fact-based understanding of their landscape and fully thrive in the new age of digital now upon us.
The July issue of Digitalisation World includes a major focus on smart cities. Much talked about, and already being developed in various parts of the world, what are the key ingredients needed to create a smart city? In particular, what role do IT and digital transformation technologies have to play? Through a mixture of articles and comment pieces, industry experts provide plenty of answers.
Part 8.
Chermaine Koo, Project Manager for Foolproof (experience design and digital transformation specialists) at its Singapore office, gives her expert opinion on smart cities being developed around the world, especially in Singapore.
There has been a significant surge in global emphasis on the future cities agenda in recent years. The rapid pace and scale of urbanisation raise concerns about whether future cities can remain sustainable and liveable as the majority of the global population comes to live in urban areas.
In Asia, Smart Nation: Singapore’s Smart City project has attracted attention on a global scale. It is the government’s response to the challenges of urban density, energy sustainability and an ageing population. Singapore is notable among such programmes due to its size and status as both a city-state and nation-state, which allows the flexibility to become the first smart nation. It is no surprise that Singapore has retained first place since 2016, most recently in the Asian Digital Transformation Index 2018. Indisputably, the nation’s digital transformation is laudable – considering how it used to be a swamp-filled jungle in 1819!
Singapore’s Cities 2.0 Strategy
The Cities 2.0 strategy (or Smart Cities) emerged as a solution for future urban living. Singapore is a crucial node in an ongoing global debate about the increasing need for urban renewal in today’s post-industrial society. The increasingly digitally mediated infrastructure of cities will, however, have far-reaching implications for what was previously ‘dumb’ city management, for government and for inhabitants’ experience. Singapore expands the interpretation of ‘smart’ into a working definition: successfully placing citizens at the centre of seamless services, achieved through the pervasive use of data to deliver enhanced services for citizens and businesses.
Smart Nation Platform
The implementation of smart infrastructure in the Smart Nation Platform (SNP) is a critical capability. It supports the ubiquitous connectivity and big data that form an essential foundation for the flow of information. The vision is to tap the SNP to enable connectivity that will optimise the city-state. Sensor deployment across the system allows government agencies to collect data, meaning they can plan, create and execute citizen-centric solutions. Big data is perceived as a magnifying glass that provides higher precision, harnessing algorithms to produce predictive analytics for a fully optimised city-state.
This effectiveness is made possible by real-time awareness and advanced predictive analytics deployed across mobile devices and urban infrastructure to support decision-making. Much of the rhetoric around, and creation of, smart city technologies revolves around the production of an Internet of Urban Things and urban computing: networked devices, sensors and actuators embedded into the fabric of buildings and infrastructure.
From a technological perspective, the key ingredient is 5G and “the fog”, which allows decentralised communication between devices. Then we need to agree upon protocols to facilitate this communication.
Will smart cities be successful?
Smart cities will only be successful when everyone – all citizens who hold a right to the city – can access and benefit from the opportunities. The Internet of Urban Things can only do so much unless it enables useful participation from the wider populace. It needs to unify and break down silos within the system. Ultimately, one’s right to the (smart) city belongs to the rich, the poor, the intellectual and the disabled alike in Smart Nation Singapore.
Chris Shannon, CEO, Fotech Solutions, comments:
When it comes to the evolution of smart cities in Europe, the focus will be on bringing intelligent software solutions to the built environment. This means that technology will be central to both new development initiatives and renovation projects alike. While it’s easier (and cheaper) to integrate smart systems into a new-build than it is to retro-fit, digitalising the existing infrastructure will be essential if cities really are to become smarter.
One of the most important infrastructure initiatives for municipalities is the digitalisation of inner-city transport. Of the $81 billion that the International Data Corporation (IDC) estimates will be poured into smart city initiatives this year, investment in smart transport systems is second only to security projects.
This isn’t particularly surprising, given that transport is a critical component of every major city — impacting everything from property development to the environment and the economy. But it also presents a more immediate concern to governments in terms of mass urbanisation projections, congestion expense and decarbonisation targets.
As a result, transport and traffic management solutions are at the forefront of the majority of smart city projects. For example, in Calgary, we’ve been part of a pilot test that has deployed our state-of-the-art distributed acoustic sensing technology (DAS) to digitalise the city’s roadway and allow the trial of autonomous vehicles for public transport.
Given that the need is to enhance the existing infrastructure, the benefit of the technology is that it ‘plugs-in’ to pre-installed fibre optic cables, converting them into an ecosystem of highly-sensitive, individual vibrational sensors. This provides city authorities with real-time data streams of their entire road systems and enables them to track traffic and public transport infrastructure — allowing them to analyse the speed and density of traffic, pinpoint congestion and identify disruptions.
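As an illustration of the principle (this is not Fotech’s actual processing pipeline, and the positions and timings below are invented): once the same vibration signature is detected at two known points along the fibre, a vehicle’s speed falls out of the distance and the time difference between detections:

```python
# Illustrative speed estimate from two detections of the same vehicle
# at known positions along a fibre acting as a distributed sensor.

def vehicle_speed_kmh(position_a_m: float, time_a_s: float,
                      position_b_m: float, time_b_s: float) -> float:
    """Estimate speed from two detections of the same vehicle along the fibre."""
    distance_m = abs(position_b_m - position_a_m)
    elapsed_s = abs(time_b_s - time_a_s)
    return (distance_m / elapsed_s) * 3.6  # m/s -> km/h

# Same vehicle detected 250 m further along the fibre, 18 seconds later:
print(round(vehicle_speed_kmh(1_000, 0.0, 1_250, 18.0), 1), "km/h")  # ~50 km/h
```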
So, what we’ll see more of as smart cities begin to take shape, is greater investment in and deployment of interconnected, digital technologies that can augment pre-existing infrastructure — such as traffic systems, rail networks, energy and power grids, etc. Cities will need to do this to successfully drive forward their digitalisation strategies.
Gil Bernabeu, Technical Director, GlobalPlatform, explains:
We are connecting the world around us and smart cities are a natural evolution. With 65% of the world’s population projected to live in cities, we’re set to place even more trust in technology to make the urban environment more efficient, sustainable and safe. But with billions of devices – like traffic lights, CCTV cameras and speed limit signs and sensors – going ‘online’ to gather and communicate data, the platform for attacks expands exponentially.
Let’s think about data first. Obviously, the sensitive data being gathered and transmitted by these devices can be hijacked. The hacking of actuators and the signals they send is potentially more troubling, though. Think about a device pinging the wrong speed limit to an autonomous vehicle. The ramifications could be fatal. Now let’s consider the devices themselves, as they can also be a platform for other attacks. One of the largest DDoS attacks in history was launched from a compromised network of baby monitors. Just think what a connected city could do.
For this reason, the many and varied smart city stakeholders must learn from the mistakes of the past. Smart cities must be built upon a foundation of trusted security to ensure the safety of sensitive data, the integrity of infrastructure and the welfare of citizens. But we are not starting from the beginning here. Proven standards and specifications are freely available to ensure that all devices can interoperate, delivering seamless operation and experience. This gives service providers and device manufacturers the means to interact when deploying secure digital services, regardless of market or device type. With this collaborative approach we can give those living in smart cities greater simplicity, convenience, security and privacy in their everyday lives.
Andrew Fray, UK MD at Interxion, says:
“Conversations about smart cities are dominated by how artificial intelligence, machine learning and the Internet of Things will revolutionise everything from entertainment to waste management. The concept of a smart city conjures up futuristic images glamorised by sci-fi blockbusters. However, the real ‘smart’ city is less about flying cars and more about devices, sensors and data that will drive efficiency and control, but also provide financial returns. There are a number of obstacles to overcome before smart cities will be fully realised. Many are questioning where the data will be held, how it will need to flow to deliver required service levels and how it will give investors in urban innovation the returns they need. New technology will no doubt play a major part; however, the key ingredient in harnessing the potential of smart cities will be connectivity.
“Gartner predicts 64 billion connected devices in circulation by 2025, creating a torrent of information between chips, devices, people, systems and organisations. The demand on mobile and optical fibre networks will increase super-exponentially. 5G on its own will not solve everything; speed will be up, capacity will be up, but there is so much more to be figured out. Many predict disaster scenarios, where cities that fail to get things right will be plunged into darkness. However, the reality is that if the products and services don’t connect, businesses will simply go out of business, or fail to launch altogether.
“Data centres will certainly be vital data transit locations and will act as safe havens for businesses looking for a range of connectivity services in order to control the flow of data. They also offer a solution for businesses who need scalability to increase bandwidth based on content demands. Moving everything to the cloud and to the edge will not be easy and, as a result, data centres will need to be highly-connected as well as carrier- and vendor-neutral to prevent lock-in to proprietary solutions. Smart people, building smart cities, will need to make some smart decisions soon about where data will live.”
Johan Herrlin, CEO at Ito World, comments:
“Cities are still reeling from the time when Uber arrived unannounced and started operating without cities having a say. Since then, they have greeted new mobility entrants with more caution. Many have attempted to regulate the roll-out of new services, such as dockless bike sharing, electric bike sharing and electric scooters, with a patchwork of regulations. The LA Department of Transportation has done all this and has additionally attempted to harness the power of the data generated by these new mobility services by requiring companies to share their data with the city via a new data format, the Mobility Data Specification (MDS).
The idea is to require all mobility companies to share this information in a standardised way so that the city can use the aggregated information for better planning, provision of public transport, etc., in effect turning each bike and scooter into a sensor.
While companies are complying (they’re obliged to), they have also pushed back, with Uber and Lyft leading the way. Other US cities, in states such as Nebraska, Ohio and Tennessee, have begun to demand that companies use MDS as a condition of being a data provider. While this is not yet a ‘successful smart cities project’, it does point to a real-world example of how cities are using sensors (in the form of bikes and scooters) to better inform decision-making in the city.”
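To make the idea concrete – the record fields below are simplified inventions and do not reproduce the actual MDS schema – once trips arrive in a common format, a city can, for example, aggregate trip end-points onto a coarse grid to see where demand concentrates:

```python
# Simplified illustration of aggregating shared trip data for city planning.
# Fields are invented for the example, not the real MDS schema.

from collections import Counter

trips = [
    {"provider": "scooter_co", "end_lat": 34.052, "end_lon": -118.244},
    {"provider": "bike_co",    "end_lat": 34.051, "end_lon": -118.245},
    {"provider": "scooter_co", "end_lat": 34.060, "end_lon": -118.250},
]

def grid_cell(lat: float, lon: float, cell_deg: float = 0.005) -> tuple:
    """Snap a coordinate to a coarse grid cell (roughly a few hundred metres)."""
    return (round(round(lat / cell_deg) * cell_deg, 4),
            round(round(lon / cell_deg) * cell_deg, 4))

demand = Counter(grid_cell(t["end_lat"], t["end_lon"]) for t in trips)
for cell, count in demand.most_common():
    print(cell, count)
```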
How to solve the disconnect between manual and digital asset management.
By Mark Gaydos, chief marketing officer, Nlyte.
With the majority (96 per cent) of asset decision makers noting that hardware and software asset control is one of their top five priorities for their business, it comes as a massive shock that a third of those enterprises are still tracking their data assets manually.
This data comes from a Sapio Research investigation into technology asset management (TAM), commissioned by Nlyte Software.
When it comes to technology decision-making, a number of barriers lead to inefficient processes and prevent an organisation from gaining clear control of its data assets. Whether this stems from constraints on team budgets or a lack of specific skills, there remains a disconnect: teams are pushing for digital innovation, yet still working with outdated, laborious manual processes – causing them to easily lose control and sight of their full data assets.
We know that the digital landscape is changing and, while audits of course have their place, never before have they been so diligent and intense. But what does this mean for IT managers when their data assets are put under the microscope? Moving forward, teams need a more lightweight, agentless technology solution to remove these headaches.
Audits are but one way for an organisation to re-evaluate and re-configure where its assets sit and get a full picture of its current IT stack. No organisation enjoys a third-party audit, but where self-audits are adhered to, the company can be confident that it is always on top of its game. Being able to comply at speed and under stress is a real gift for IT managers, helping them deliver what the business needs to prosper. Success comes from employing new technologies that automate the audit process, providing constant monitoring and updated reporting in real time.
Keeping track of your stack
IT managers don’t have the easiest task in the organisation, especially when an upcoming audit looms over their heads. They need to be fully on top of compliance the moment hardware or software is added or removed, and stay in step with the company’s growth and changing circumstances in order to provide the right support at the right time. When you factor in assets from mergers and acquisitions, the picture can be even less clear, as incomplete documentation and differing standards and technologies cloud the situation.
Keeping track of these assets can be incredibly taxing, especially if teams are still following old processes and using inappropriate but ‘free’ tools, such as Excel, to do so. Adding to this, IT managers need to ensure that the software attributed to hardware is also being tracked; otherwise organisations could be spending too much of their budget on unnecessary licences, causing the business to lose money, or worse, be under-licensed and face a hefty fine from their providers. With constraints on team budgets already, this can make an IT manager’s job even harder, and can lead to damaged relationships as they try to make magic happen without the right tools and processes to assist them.
As ever in business, and especially when it comes to technology decision-making, budget constraints and a lack of specific skills are traditionally what prevent an organisation from gaining total asset control. Businesses need to think long term about what may happen when they realise they’re no longer in control, and consider whether they are one of those businesses where it takes something seismic, such as a breach or a major fine, to jolt them into action and put procedures in place to prevent it from happening again.
Regaining control before it’s too late
Having historically been brought up on Excel to manually track their entire technology stack, it can seem the easiest and most comfortable option for IT managers to stick with when the letter for their next audit arrives; however, this no longer has to be the case. With 84 per cent of IT managers at least a little concerned about the prospect of a vendor software audit, there is now, more than ever, a strong case for businesses to find a solution that offers a ‘real-time’ view of their entire technology stack. That solution is Technology Asset Management, or TAM.
TAM is the practice of managing, optimising, maintaining and disposing of software and hardware assets, serving the enterprise by bringing order to a chaotically grown compute infrastructure. It functions as a single source of truth for the business to manage the cost, security, compliance and operation of its technology.
This software moves IT managers into the 21st century with automated systems and processes to track software and hardware licences in a digital environment. In fact, 61 per cent of managers in the same survey claimed that a ‘real-time’ view would enhance IT innovation to support their business goals, and over a third (37%) believe they could save 20 per cent of their budget by remediating underutilised software. The solution is able to organise the use of each and every licence across a plethora of devices, locations and even regions, to understand exactly what is being used, how, and by whom.
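A minimal sketch of the kind of check such a tool automates – the product names, seat counts and prices below are invented for illustration – is to compare purchased seats with seats actually in use and estimate the saving from reclaiming the difference:

```python
# Hypothetical licence-utilisation check: purchased seats vs seats in use,
# and the potential saving from reclaiming the unused difference.

licences = [
    {"product": "CAD Suite",   "purchased": 200,  "in_use": 120, "cost_per_seat": 900},
    {"product": "BI Platform", "purchased": 500,  "in_use": 480, "cost_per_seat": 300},
    {"product": "PDF Editor",  "purchased": 1000, "in_use": 350, "cost_per_seat": 60},
]

total_spend = sum(l["purchased"] * l["cost_per_seat"] for l in licences)
reclaimable = sum((l["purchased"] - l["in_use"]) * l["cost_per_seat"] for l in licences)

for l in licences:
    unused = l["purchased"] - l["in_use"]
    print(f'{l["product"]}: {unused} unused seats '
          f'(~{unused * l["cost_per_seat"]:,} in potential savings)')

print(f"Potential saving: {reclaimable:,} of {total_spend:,} "
      f"({reclaimable / total_spend:.0%})")
```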
The TAM-dream
Technology Asset Management also alleviates worries about wasted manpower by bringing optimisation and automation to the centre of the process and giving time back to everyone involved in the audit. With a number of organisations having spent over 500 hours managing a single audit in the past, a solution that streamlines the entire process is an IT manager’s dream.
Once an audit has been run, not only can information be recorded for software providers, but it can also show where cost savings can be made from over-bought and under-used licences. In the same vein, it is possible to prevent the misuse and/or over-use of software once a colleague or team member leaves the company - if licences are still being tracked manually, there’s a good chance this would be forgotten or missed.
Beyond the core time and money aspects of real-time management, the matter of risk looms large. Adoption of, and discipline around, technologies like TAM need to be higher across the spectrum for large organisations. As organisations grow they often lose control over their technology assets, and regaining this control no longer has to be the laborious process it once was.
It’s impossible to avoid a software audit and, for any mature and complex organisation, just as hard to run one with outdated tools in the time frames typically given. However, preparation and planning can make this process a lot simpler, and using a solution such as TAM puts all the information at the touch of a button for any IT manager. It allows informed, strategic and more cost-effective decisions to be made that will benefit longer-term business goals.
The July issue of Digitalisation World includes a major focus on smart cities. Much talked about, and already being developed in various parts of the world, what are the key ingredients needed to create a smart city? In particular, what role do IT and digital transformation technologies have to play? Through a mixture of articles and comment pieces, industry experts provide plenty of answers.
Part 9.
Asks Iain Shearman, MD, KCOM NNS.
The rollout of 5G connectivity will be vital to release the potential of the Internet of Things (IoT) as 5G promises to be up to 30 times faster than current network infrastructure. This new network will allow for seemingly instantaneous, two-way data transfer within the IoT in a smart city environment, connecting billions of smart objects and devices. The fifth-generation network is the gateway to truly smart cities because it will enable connectivity between not just mobile devices but everything to everything (X2X).
A truly smart city is one that puts citizens at the heart of innovation. This means analysing vast amounts of data and using it to improve public and private sector services. Examples from the transport sector alone include the creation of more electric vehicle charging stations in response to citizen demand and improvements in public transport provision to reduce congestion.
However, hardware security concerns have stalled the rollout of the UK’s 5G network and significant upgrades will be required to the current UK network infrastructure before complete rollout of 5G. This will in turn need large amounts of funding as well as time to ensure testing before beginning to integrate. Smart city development requires reliable, seamless fixed networks - without a robust network infrastructure any potential technological advancements will be built on foundations of sand.
Furthermore, the extent to which cities will be connected means security must always be a top priority. More connected devices means more data, so the risk of a security breach is significantly increased and has the potential to be more damaging. A strong and robust security infrastructure is a must-have before smart cities can successfully grow.
Smart cities are gradually becoming a reality around the world, with many European countries leading the way. The biggest challenge for the UK is the diversity of progress in digital transformation within UK organisations. Whilst there are a number of organisations and businesses pioneering digital transformation, many are stuck in a transformation paralysis due to lack of knowledge, funding or poor access to connectivity. It is this lack of consistency that stops mass digital programmes, such as those needed to create a truly smart city, from being successful.
Says Ken Figueredo, oneM2M:
Consider the example of waste management, one of the new breed of smart city services, where sanitation and the emptying of bins are managed through remote connectivity. Fixed workforce schedules evolve into dynamic, just-in-time pick-ups so that citizens aren’t confronted by unsightly, overflowing bins.
The transition will take time – it may take several procurement cycles to install connected bins in public arenas, on streets and around recreational spaces – and not everything will be upgraded at once. Through competitive bidding and public procurement rules, different vendors will supply waste bins, and anecdotal evidence indicates that each supplier will use a proprietary connectivity and status-reporting solution.
This will result in a set of technology and data integration headaches, creating friction for cities that want to make optimal, data-driven decisions. The alternative is to operate several silos and absorb the associated costs. To add to the complication, a city might later choose a new supplier to add more on-street bins, resulting in two systems for on-street bin services.
Standardisation side-steps these problems. A common approach means that a city can connect and communicate with any smart city asset. This could be a bin, an environmental sensor, a speed monitor, or a row of streetlights. Through standardisation, cities can mix-and-match vendor solutions, mitigating the risk of vendor lock-in and fostering a competitive supplier base that lowers prices.
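To illustrate why that matters – the classes below are hypothetical and do not represent the oneM2M resource model – if every vendor’s asset exposes the same minimal interface, the city platform can poll bins, sensors and streetlights in exactly the same way:

```python
# Hypothetical common-interface sketch: any asset that exposes an id and a
# status() method can be polled by the same platform code, whoever supplied it.

from typing import Protocol

class SmartAsset(Protocol):
    asset_id: str
    def status(self) -> dict: ...

class VendorABin:
    def __init__(self, asset_id: str, fill_percent: int):
        self.asset_id, self._fill = asset_id, fill_percent
    def status(self) -> dict:
        return {"type": "waste_bin", "fill_percent": self._fill}

class StreetlightRow:
    def __init__(self, asset_id: str, lamps_on: int):
        self.asset_id, self._on = asset_id, lamps_on
    def status(self) -> dict:
        return {"type": "streetlight_row", "lamps_on": self._on}

def poll(assets: list) -> None:
    for a in assets:
        print(a.asset_id, a.status())

poll([VendorABin("bin-014", 85), StreetlightRow("row-ne-02", 12)])
```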
In the future, cities will exchange data across domains. An example is traffic management, environmental monitoring and emergency services. Standardisation will also help cities to overcome barriers in sharing data between city departments. The same principle applies to regional challenges, allowing neighbouring agencies to collaborate and make commuter travel easier.
This is where oneM2M steps in, providing a standard for managing large and diverse populations of connected devices. It is well suited to handle city sensors and the data they generate and is being applied to a variety of smart city services across Europe and in South Korea. Examples include smart parking, social care and building energy management. In the UK, oneM2M underpins a public-private, data marketplace, where municipalities can share their data with smart-city service providers in an economic and controlled way.
The breadth of these examples illustrates the merits of an open-standard approach. It is a strategy that solves near term demands to connect devices, while also serving longer-term requirements to share data across organisations and application silos.
The move to smart cities poses a threat to power quality
Climate change targets are putting pressure on renewable energy to play a bigger role in powering the world around us. However, can it keep up as we enter the age of the always-on smart city? Here Steve Hughes, managing director of power quality specialist, REO UK, explores how the smart grid could hold the key.
Whether it’s the Kyoto Protocol, the UN Framework Convention on Climate Change, or the dozen or so country-specific programmes around the world, climate change is driving a commitment for energy decarbonisation. This means moving away from fossil fuels and using renewable energy.
Inconsistent
According to the European Wind Energy Association (EWEA), an average onshore wind turbine can produce over six million kilowatt hours (kWh) in a year – enough to power 1,500 average European homes. This is good, but renewable output is inconsistent, and we still need more viable options to store this power or transport it over long distances. Ultimately, we need to get smarter with our power grids.
Smart cities need a smart grid
Smart cities will rely on a smart energy grid which, in turn, will rely on data. Smart metering will become essential in identifying and managing peak demand and consolidating this with renewable systems.
Power companies already know that thousands of people will turn their kettles on during the Coronation Street commercial break. Data collected and analysed via the smart grid will make it possible to accurately predict when and where the grid is going to be under pressure.
As we all become more eco-friendly this may include key periods in the evening when electric cars are plugged in to charge. This means energy providers will be able to more accurately match the supply and demand of renewable energy, introducing it to the grid as necessary.
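A minimal sketch of that prediction – the half-hourly readings below are invented sample data, and a real forecast would use far richer models – is simply to average historical smart-meter demand by time-of-day slot and look for the evening peak:

```python
# Toy demand-profile forecast: average past readings per half-hour slot.

from collections import defaultdict

# (half-hour slot of the day, demand in MW) from previous days
readings = [
    (36, 310), (37, 420), (38, 460),   # 18:00-19:30 yesterday
    (36, 305), (37, 435), (38, 470),   # same slots the day before
    (4, 120),  (5, 115),
]

by_slot = defaultdict(list)
for slot, mw in readings:
    by_slot[slot].append(mw)

profile = {slot: sum(vals) / len(vals) for slot, vals in by_slot.items()}
peak_slot = max(profile, key=profile.get)
print(f"Expected peak around slot {peak_slot} "
      f"({peak_slot // 2:02d}:{(peak_slot % 2) * 30:02d}) at ~{profile[peak_slot]:.0f} MW")
```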
Double-edged sword
However, more data means more complexity. The rise of IoT systems and devices that transmit both power and data, often over the same physical lines, can cause problems.
In power grids this can result in electromagnetic interference (EMI), a phenomenon that can cause electrical equipment to run inefficiently, consume more energy and, in rare cases, damage components and shorten product life spans.
Using smart metering to monitor these problems will help, but smart metering itself can interfere with power quality. Modulated signals and data moving across the mains supply create electrical noise, meaning that any system reliant on communications over the mains, such as a smart grid, needs to take precautions.
That’s why we’ve developed the REO Power Line Filter. It filters out extraneous frequencies to protect the optimum communications range of 9-95 kHz. By ensuring the signal is not corrupted by the noise that can occur within an installation and across the grid, connected machines operate much more effectively – keeping them smart.
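The filter itself is hardware, but the principle can be illustrated in software: pass the 9-95 kHz band used for communications over the mains and attenuate noise outside it. The sketch below is illustrative only – the sample rate, filter order and signals are assumptions, not REO product behaviour:

```python
# Software illustration of band-pass filtering around the 9-95 kHz
# powerline-communications band; parameters are assumed for the example.

import numpy as np
from scipy.signal import butter, filtfilt

fs = 1_000_000                      # 1 MHz sample rate (assumed)
b, a = butter(4, [9_000, 95_000], btype="bandpass", fs=fs)

t = np.arange(0, 0.002, 1 / fs)
signal = np.sin(2 * np.pi * 50_000 * t)        # in-band comms carrier at 50 kHz
noise = 0.5 * np.sin(2 * np.pi * 300_000 * t)  # out-of-band interference
filtered = filtfilt(b, a, signal + noise)

print("RMS before filtering:", round(float(np.sqrt(np.mean((signal + noise) ** 2))), 3))
print("RMS after filtering: ", round(float(np.sqrt(np.mean(filtered ** 2))), 3))
```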
Smart cities are fast becoming a reality, and if we are to embrace them, we need to overcome challenges with next generation energy systems and power grids.
All strong and successful relationships are built on trust. As modern businesses face up to rapidly-evolving challenges and constant disruption, they are also acutely aware of the opportunities that harnessing the power of data can bring.
By Clint Hook, Director of Data Governance, Experian UK&I.
But what if they can’t trust their data? Whether due to incomplete data, poor quality control or simple inaccuracy, companies are increasingly confronted with an array of complex data challenges that can significantly undermine their chances of success.
In fact, Experian research shows businesses are eager to exploit the data they have to make more informed, better decisions that help drive innovation and gain competitive advantage. However, a significant lack of trust is hindering their progress.
Immature data management
The problem is clear: trusted data isn’t a reality for most businesses. Companies told us they suspect that almost a third (29%) of their data is inaccurate, while another third (33%) identified a lack of trust as one of the biggest challenges in attempting to leverage their data.
One of the issues identified is data management. As data grows exponentially, flowing from more and more sources, data management practices have not matured to deal with the higher volume and increased variety of data streams.
This failure to keep ahead of the game has resulted in poor data quality, exacerbating the lack of trust across the organisation – and that distrust also hits the bottom line.
Data-driven insights and projects are delayed and disrupted when businesses can’t trust their information. Some 95% of organisations see negative impacts from poor data quality control, resulting in wasted resources and additional costs.
These challenges aren’t going to disappear anytime soon, and are likely to be intensified as more digital channels and data assets become available as businesses evolve to meet ever-increasing consumer demands.
Your data landscape
The only way to break this negative spiral is to build trust. Organisations need a better understanding of their data – and their systems – to develop an overarching data strategy. It is crucial that businesses have the right people, processes and technology in place to make sure their data is complete, accurate and reliable.
A good starting point is to carry out a ‘stock take’. After all, how can a business solve its limitations and weaknesses if it hasn’t identified them in the first place? Understanding the root cause of data quality issues enables the organisation to better understand the problems and take actions to remediate these issues.
Developing practices around data remediation and data monitoring will help maintain high levels of quality over time. Those practices help build trust and understanding, which allow you to leverage data across more business initiatives.
To get to this stage, first businesses need to understand what and where their data management weaknesses are.
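As a minimal sketch of what such a ‘stock take’ might look like in practice – an assumption-laden illustration rather than a prescribed method – the example below benchmarks three simple quality metrics (completeness, duplicate rows and obviously invalid email addresses) over a hypothetical customer table, so the same figures can be re-run later to evidence improvement.

import pandas as pd

# Illustrative data quality 'stock take' for a hypothetical customer table:
# benchmark completeness, duplication and basic email validity so the same
# metrics can be recalculated after remediation to demonstrate progress.

EMAIL_PATTERN = r"^[^@\s]+@[^@\s]+\.[^@\s]+$"

def quality_benchmark(df: pd.DataFrame) -> dict:
    return {
        "rows": len(df),
        "completeness_%": round(100 * df.notna().mean().mean(), 1),
        "duplicate_rows_%": round(100 * df.duplicated().mean(), 1),
        "invalid_email_%": round(
            100 * (~df["email"].fillna("").str.match(EMAIL_PATTERN)).mean(), 1
        ),
    }

if __name__ == "__main__":
    customers = pd.DataFrame({
        "name": ["Ada", "Ada", "Grace", None],
        "email": ["ada@example.com", "ada@example.com", "grace@", None],
    })
    print(quality_benchmark(customers))
    # e.g. {'rows': 4, 'completeness_%': 75.0, 'duplicate_rows_%': 25.0,
    #       'invalid_email_%': 50.0}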
Your data strategy
One of the reasons data management has remained relatively unsophisticated in recent years has been the lack of support from senior management. While the C-suite certainly understands the importance of data, and the insight it can bring, they don’t always invest effectively in its management over time.
One reason for this is that organisations often approach data management from a technology-first perspective, and view it as a finite project. They believe they need to source and purchase a technology, implement it, and then all the management issues will be magically solved.
Unfortunately, there is no silver bullet. Building trusted data that can be leveraged is a long-term project that needs a long-term strategy that looks at how information can be improved and maintained over time.
Quick wins
The problem in achieving this is that some data management projects can take months, or even years, to deliver results. Such a time lag can contribute to wider frustration in the business and lead to funding being reallocated.
To counter this, it is better to start with small projects, cherry-picking some easy wins to build momentum and to demonstrate the importance of the strategy to senior stakeholders.
Start by benchmarking your data quality levels and use that baseline to demonstrate ongoing improvement. This will create credible reference points that can be leveraged to gain support from less supportive departments.
Improving contact data at the point of capture – customer emails and mailing addresses are often riddled with errors – removing duplicates in your system, and beginning to consolidate your records are all avenues that can offer those quick wins.
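By way of illustration only, the sketch below shows one such quick win under assumed conditions: normalising email addresses at the point of capture and consolidating obvious duplicate records on the normalised value. The record shape and field names are hypothetical.

# Illustrative quick win: normalise contact emails at the point of capture and
# merge obvious duplicates on the normalised value, filling gaps in the kept
# record from later duplicates so no captured detail is lost.

def normalise_email(email):
    return email.strip().lower()

def consolidate(records):
    merged = {}
    for record in records:
        key = normalise_email(record.get("email", ""))
        if key not in merged:
            merged[key] = dict(record)
            continue
        for field, value in record.items():
            if not merged[key].get(field):
                merged[key][field] = value   # fill in missing or empty fields
    return list(merged.values())

if __name__ == "__main__":
    captured = [
        {"email": " Ada@Example.com ", "name": "Ada Lovelace"},
        {"email": "ada@example.com", "name": "", "phone": "020 7946 0000"},
    ]
    print(consolidate(captured))
    # -> a single record with name, email and phone all populated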
While these wins will differ for every business, it’s crucial you have a clear picture of your data landscape to determine the issues which are causing the largest detriment to your business and then identify if there is any way to solve the issue with data management techniques.
An ongoing journey
We are living in an era of data, and businesses are aware of how it can help them improve operations and customer experience, and drive innovation. But that is only possible if their data is trusted.
It’s an ongoing journey, but it’s imperative companies place their data management strategy at the heart of their operations if they are serious about harnessing their data and taking the opportunities it brings.
If they don’t, they’ll soon be left behind by competitors who have been more adept at understanding their data, and crucially, helping it inform their decision-making processes in the future.
The July issue of Digitalisation World includes a major focus on smart cities. Much talked about, and already being developed in various parts of the world, what are the key ingredients needed to create a smart city? In particular, what role do IT and digital transformation technologies have to play? Through a mixture of articles and comment pieces, industry experts provide plenty of answers.
Part 10.
Ross Murphy is the CEO of digital publishing company, PageSuite, and custom app developer, SixPorts. He draws on his strong background in emerging technology to explore the possibilities of technology to connect people and help communities thrive.
As technology continues to progress at such a rapid rate, the way in which we live our lives is evolving equally fast. With this in mind, the concept of smart cities has emerged as a hot topic, taking the potential of smartphone and voice tech from a personal to a community scale. In the simplest of terms, a smart city is an urban area that uses sensors and technology to collect data. This data is then used to improve the lives of residents and increase the efficiency of the services they use or access.
The United Nations has predicted that by 2030, 60% of the world’s population will be living in urban areas. This, combined with an ever-growing population, means that our cities need to adapt in order to improve quality of life for their citizens, as well as to address existing economic, social and environmental challenges. To do this, policy makers are looking at how best to utilise emerging technology to streamline services and make things more sustainable and efficient for the future.
While the concept of smart cities sounds like something of the future, it is very much something of the present. Early adopters of smart city technology include Barcelona and Amsterdam, with US cities such as New York, San Francisco and Chicago not far behind them. Barcelona, in particular, has garnered a reputation as a trailblazer for urban innovation. It has already implemented smart technology including sensors to monitor air quality, an LED smart street light system and space vacancy sensors in multi-storey car parks. Closer to home, housing developers and policy makers have seen the light in ensuring new schemes have the tech in place for homes and all the elements that make a new community – from roads and lighting to utilities – to be connected.
This kind of technology is already influencing our day-to-day lives, from voice-controlled smart speakers, such as Amazon Echo, to apps allowing us to remotely control our central heating. It allows us to do everyday tasks with increasing ease, and with the emergence of smart cities this will be seen on a wider scale.
There is one thing posing a problem for smart cities – a lack of infrastructure. They require a quick, responsive and stable infrastructure that can handle huge amounts of data efficiently in order to reach their full potential.
With the first phase of 5G launched in the UK in May, we appear to have our answer. 5G promises improved speed, lower latency and greater capacity compared with the 3G and 4G networks before it. The potential of 5G is huge: it can open the floodgates for the first truly smart city.
In the future, fast connections brought about by 5G will enable so-called Internet of Things (IoT) sensors across cities to communicate with one another to aid a variety of areas of city life, including transport, energy management, healthcare and public safety.
Increased wireless connectivity will help to reduce the time people spend waiting for public transport by giving operators data on who will be travelling at any given time, allowing for dynamic bus routing. 5G sensors monitoring the flow of traffic will allow traffic lights to change their sequence according to where vehicles are concentrated. This, paired with satellite navigation systems guiding drivers to available parking spaces and away from traffic jams, will considerably reduce congestion.
Energy management is becoming increasingly important from both a financial and an environmental perspective, and will only become more so. Sensors will be able to dim lights when no one is around, saving power and reducing costs, and 5G will let us monitor energy-consuming devices more accurately and predict energy usage. Used this way, smart cities can create longer-term answers to our energy crisis.
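As a minimal illustration of that occupancy-based dimming – the thresholds, lamp wattage and timings below are invented for the example, not drawn from any real deployment – the following sketch dims a street light when nobody is nearby and estimates the energy saved against running at full power all night.

# Illustrative only: dim a street light when no one is nearby and estimate the
# energy saved compared with running at full power all night. Thresholds,
# lamp wattage and timings are invented for the example.

LAMP_W = 100  # full-power draw of a hypothetical LED street light, in watts

def brightness(ambient_lux, presence_detected):
    if ambient_lux > 80:
        return 0      # daylight: off
    if presence_detected:
        return 100    # someone nearby: full brightness
    return 20         # dimmed power-saving mode

def energy_kwh(samples, interval_h=0.25):
    """Energy used over 15-minute samples of (ambient_lux, presence)."""
    return sum(LAMP_W * brightness(lux, presence) / 100 * interval_h
               for lux, presence in samples) / 1000

if __name__ == "__main__":
    night = [(5, False)] * 28 + [(5, True)] * 4   # 8 hours, mostly empty street
    print(round(energy_kwh(night), 2), "kWh vs",
          round(LAMP_W * 8 / 1000, 2), "kWh at full power all night")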
Healthcare technology will enable us to remotely monitor patients, which will help reduce the strain on hospitals and healthcare centres. One of the most pioneering ways in which healthcare will benefit from smart technology in the future is that surgeons will be able to perform operations internationally with the help of video controls.
From home surveillance cameras to flood sensors, public safety will be improved in smart cities. A 5G wireless network will help the emergency services to respond to crimes more quickly and give them more information to help deal with each situation. Video surveillance software will use facial recognition to identify criminal suspects or help to find missing persons. There is also the potential to reduce gun crime, an increasing issue particularly in the US, as 5G sensors will be able to triangulate the location of a gunshot in real time.
While the potential benefits of smart cities are undeniable, there is certainly still a long way to go before we have truly smart societies. However, 5G is certainly going to accelerate the rate at which they emerge. As the world around us changes, we are turning to emerging technology, such as 5G, artificial intelligence (AI) and augmented reality, for the answers.
The real impact that smart cities and 5G will have on society is in developing the technology we already use and expanding it on a much wider scale. It will bring together all the different information available to us – for example how to get somewhere, how much something will cost and what roads to avoid – all in one place. If privacy concerns are addressed, the technology has the potential to simplify how we live, taking us from personal convenience to the greater community good.
Mike Hughes, Zone President for UK & Ireland, Schneider Electric, comments:
“Cities make up two percent of the world’s surface but house more than half of the world’s population and consume 75 per cent of energy resources. Meanwhile, the increasingly digital, connected and electric nature of our lives means that we each as individuals have greater energy needs than ever before. Is this pace of growth sustainable for urban centres and cities like London or Paris, let alone megacities? The answer is yes, but only if we make our cities ‘smarter’.
“Incorporating renewables into our energy mix is a vital part of reducing our environmental impact, but their potential is being wasted by our inefficient use of that energy. Modern technologies, smart sensors and services that can help us identify and tackle energy waste can and must help to improve cities’ efficiency, sustainability, and resilience.
“Aging city infrastructures pose connectivity and network management challenges. At the same time, a 24/7 society and a wide array of IoT-enabled devices and electric vehicles (EV) are fuelling greater energy demand. By working collaboratively with both public and private sectors, Schneider Electric has successfully delivered smart city project applications to more than 250 cities worldwide. Projects like these demonstrate that rethinking energy is not only a major enabler of innovation. It powers progress and life.
“As the global population grows and our world and lives become increasingly electrified, creating sustainable cities means creating smart cities, powered by clean energy that is responsibly consumed and saved. The fact is it is far easier to save a unit of energy than it is to create one. The only way we will tackle climate change and create cities fit for the future is by rethinking our relationship with energy as individuals, businesses and nations.”
Craig Stewart, SVP of Product, SnapLogic explains:
Smart cities contain thousands of data points streaming a near limitless amount of data. Whilst there’s no doubt this data is invaluable for a smart city to thrive, it’s only useful if the data can be accessed in real-time and analysed with context.
In order for smart cities to become a reality, there needs to be a constant, seamless flow between the multiple data sources around the city – particularly where the data is needed for analysis and will trigger a resulting action – all without data getting trapped in individual siloes along the way. From a city-wide organisational perspective, it would generally be easy to segregate information by department, e.g. weather information to the environment department, road usage data to the traffic department, and so on. In some ways, this makes sense – after all, who knows better about the challenges of those areas than the departments responsible for them? And this is traditionally how data has been managed: in silos.
However, in a smart city environment, the interdepartmental data intersections and correlated impacts can be numerous and not always obvious, and thus data siloes can result in huge inefficiencies or miscalculations in everything from public transport to waste management to crime detection, making the promised benefits of smart cities a challenge to achieve.
In order to attain true smart city status, data siloes must be abolished. Data points owned and managed by different departments which contribute to the smart city network will need to feed into an uninterrupted data flow. To knock down these barriers and connect the multitude of diverse data points, smart cities must embrace intelligent integration platforms that leverage the transformative power of AI and machine learning. By understanding the data sets and how they are utilised, ML-driven automation can suggest integration points and process recommendations, with speed and accuracy, without requiring heavy lifting from IT teams or data analysts. This, in turn, will improve the time to insight for smart city leaders.
The advantages that smart cities will bring to humanity are limited only by the imagination of city planners. But to fully experience these benefits, cross-city information must be accessed, integrated, and analysed seamlessly.
Anthony Bartolo, Chief Product Officer for Tata Communications, says:
“Over the last ten years, we have witnessed a lot of hype building around the concept of smart cities. Finally all of the key ingredients needed to create a smart city – the sensors, ubiquitous connectivity, cloud and data analytics capabilities and IoT platforms – exist to make this pipedream a reality. As such, countries around the world have been racing to get there first, testing smart city systems in areas such as building management, transport and energy to be able to call their city one of the first truly smart cities. The possibilities for smart cities are boundless. However, the opportunities are dependent on some crucial elements including the connectivity underpinning the IoT and cloud infrastructure.
“Take Bristol for example. The city plans to solve problems such as air pollution and assisted living for the elderly as part of the smart city agenda. Testing with machine-to-machine interaction is taking place too, with companies developing wireless links that enable driverless cars to communicate with smart city infrastructure and to even bring people new entertainment experiences with sensing and video processing capabilities.”
When will smart cities become a reality? asks Daniel Itzigsohn, Senior Director of Technology and Strategy at TEOCO.
Smart cities are a reality today, but also represent an ongoing global project. Sensors are now woven into the fabric of big city life—generating billions of data points from things such as parking meters, streetlights, traffic control points, weather monitors, and security systems. It is also an expanding market, with industry analysts estimating that it will be worth $1,201bn in the next five years.
“Human sensors” will be part of this too. By connecting through social media, citizens are providing real-time on-the-ground information on traffic problems, safety concerns, vandalism, emergency health situations and a host of other issues. The trick will be to separate the nuggets from the noise. This, and other ways in which the smart city will evolve, makes pinning down “when” tricky.
What role will IT and digital transformation technologies have to play?
A vital role—it’s impossible without them. Most important is the ability to collect and analyse enormous amounts of data quickly. Next generation 5G technology will be key to this: Network slicing, a feature of 5G, means that smart city services can run across dedicated “slices” of the network without affecting consumer capacity, which in turn is important if the information from consumers is to get through and contribute to the data.
Most of today’s smart cities are already generating billions of data points, but it’s how they correlate, analyse and share this data from multiple sources, and across teams and organisations, that will turn them into the smarter cities of tomorrow. Unless this data can be effectively collected, the smart city simply isn’t possible.
What are the key ingredients needed to create a smart city?
By integrating data from social media activity and encouraging citizens to use specially created smart city apps to report incidents or flag concerns, the people living and working in a city can truly become its lifeblood; helping public authorities, bus companies, the emergency services, businesses and other organisations work more efficiently, and work together.
But to make this work, cities need to create ‘Command and Control Centers’ – essentially the ‘spinal cord’ of a smart city – to support law enforcement, public utilities, disaster management and environmental controls. These centers will emulate the way communications service providers leverage, correlate and analyse the massive amounts of data on their networks. Bringing together smart city performance management, automation and orchestration functionalities makes emergency and disaster response, for example, far more effective.
Delivering high-quality customer experience is a priority for all marketers, but research from Sitecore and Avanade suggests that almost all marketing leaders (95%) feel that their business’ customer experience is in critical need of improvement. One way that marketers can make the necessary improvements is through investing in an effective digital transformation programme.
To make improvements, marketers must be able to analyse data holistically, provide a clear view of each individual customer, and deliver a consistent, personalised experience across channels. However, two thirds of marketers feel that their business is not mature enough when it comes to understanding how personalisation and data analytics work. Simply implementing the technology is therefore not enough: marketers must truly understand how to collect, connect and process data in order to meet their customer experience goals.
Here are four key things for marketers to consider when implementing a digital transformation strategy with customer experience at its heart.
1. Improving integration within the martech stack
A lack of integration across technologies limits the ability to connect data points and leads to a disconnected experience for customers across channels. In order to get a clear view of customers and treat each of them as an individual, this data must be better connected.
Despite this, many organisations are not implementing technology strategically to capture a holistic view of each customer. In fact, 61% admit that their business loses revenue due to disconnects in the martech stack. Instead, they are supporting individual areas of customer experience, which creates siloed data across separate platforms. Ultimately, this will limit how successful customer experience can be. If you are unaware that a customer has already browsed an item on the website when they move to the app, you can’t offer them related items or build on what you already know about them.
When selecting the technologies that make up your martech stack, it is also important to ensure that the platform used is flexible enough to allow integration with new channels. For example, being able to centralise digital assets into a single platform which can then be accessed and adapted for use across multiple channels is key to targeting customers in the places they use most – be that mobile, online or in-store. A mobile app, for instance, should complement the brand’s website by drawing on the same products, prices and images as the main e-commerce platform.
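As a rough, hypothetical sketch of what connecting those data points can mean in practice – the event shape and field names are invented for the example – the snippet below stitches web and app events into a single per-customer timeline, so that an item browsed on the website is visible when the same customer opens the app.

from collections import defaultdict

# Illustrative only: merge events from separate channels into one timeline per
# customer, ordered by time, so any channel can build on what is already known.

def unified_view(*event_streams):
    timeline = defaultdict(list)
    for stream in event_streams:
        for event in stream:
            timeline[event["customer_id"]].append(event)
    for events in timeline.values():
        events.sort(key=lambda e: e["timestamp"])
    return dict(timeline)

if __name__ == "__main__":
    web = [{"customer_id": "c42", "channel": "web", "action": "viewed",
            "item": "trainers", "timestamp": 100}]
    app = [{"customer_id": "c42", "channel": "app", "action": "opened",
            "item": None, "timestamp": 160}]
    view = unified_view(web, app)
    browsed = [e["item"] for e in view["c42"] if e["action"] == "viewed"]
    print(browsed)  # -> ['trainers'], available to the app for recommendations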
2. Implementing a strategic, cloud-first approach
Cloud doesn’t automatically deliver a better customer experience. But in general, cloud platforms are more open, and provide the opportunity to experiment and iterate more often and more quickly. Greater agility comes as you incorporate updates, make changes and try new strategies in a short time frame. In fact, 87% of respondents agreed that a cloud-first approach would aid their delivery of better customer experience.
As a result of effectively implementing cloud platforms, fewer resources should be needed to manage IT infrastructure, and a business can devote more resources to innovative capabilities that differentiate the brand and create competitive advantage. One area of focus could be improving customer experience, through innovation programmes or additional manpower where needed. Although cloud-based cost savings may not be noticeable to customers directly, a quicker online delivery offering facilitated by the cloud will be!
3. AI can help you, but it isn’t the starting point
When applied effectively, AI can enhance marketers’ ability to deliver personalisation at scale. However, most executives identified AI and machine learning as key gaps in their current martech stack, with 79% saying they are yet to implement them.
As many marketers are not yet ready to use AI capabilities, it is important first to ensure that the fundamentals of data capture and analysis are in place. This includes making sure that the most important and relevant data is being captured and analysed, and that the right people and processes are in place to understand these insights. It is also important to set out a clear strategy to make data analysis efficient, and to determine how insights will be applied to deliver personalisation and increased convenience to customers.
Once this solid foundation is in place, AI can then be applied to help companies to drive even more effective data analysis and reduce the time to make critical decisions. It can also help to simplify complex tasks such as personalisation and apply data to create more meaningful connections with customers. For marketers, the implementation of effective AI can also offer a release from the mundane and give more time to focus on more creative, fulfilling tasks.
4. A cross-functional company culture will support digital transformation
Although investing in the right technology systems is important to enable digital transformation, so is fostering a company culture where all teams work together to understand and implement those technologies aligned to business objectives. However, this culture does not exist in many organisations, with over half (53%) of IT staff believing that digital marketing ‘should not be their responsibility’, and 73% of executives seeing a lack of collaboration between marketing and IT. Such disconnects inevitably hinder the organisation’s attempt to achieve martech maturity and deliver enhanced customer experiences.
One way to overcome the lack of communication and alignment is to invest in training and upskilling to ensure that all teams involved in digital transformation have the required knowledge and can form cross-functional teams to deliver greater results. With 59% of marketers surveyed claiming they have not received the IT training necessary to use digital technologies effectively, and 61% of IT executives saying marketing doesn’t receive the necessary training to use their martech stack effectively, this must be a priority.
Furthermore, a positive example of cross-team collaboration must be set at the highest level. With 63% of executives seeing a lack of collaboration between the CIO and CMO, having these two roles work together towards shared goals and objectives is an important place to start. This, in turn, can encourage a more collaborative approach across the entire business.
Today’s marketers know that improving customer experience is of utmost importance to the success of a business and that effective digital transformation can help to achieve this. This success can be achieved by taking a cloud-first approach to the martech stack, implementing flexible digital platforms and developing a more collaborative company culture. As a result, marketers will be able to develop the holistic, detailed customer view required to offer personalised, tailored, high-quality experiences.
The July issue of Digitalisation World includes a major focus on smart cities. Much talked about, and already being developed in various parts of the world, what are the key ingredients needed to create a smart city? In particular, what role do IT and digital transformation technologies have to play? Through a mixture of articles and comment pieces, industry experts provide plenty of answers.
Part 11.
Assuring security and interoperability via certification remains critical for widespread deployment.
Smart cities are not expected to be commonplace for at least another decade, according to a new poll by Wi-SUN Alliance, a global ecosystem of member companies seeking to accelerate the implementation of open standards-based Field Area Networks (FAN) and the Internet of Things (IoT). Over half of respondents expect to see widespread smart city deployments in 10 or more years, while a third predict 5-10 years. Just 15 per cent expect it in less than 5 years.
However, half cite lack of funds or investment in projects as the biggest challenge currently holding back smart city development. A further 21 per cent point to security and privacy issues, while lack of interoperability (14 per cent) is also seen as a major barrier to progressing deployments.
When asked about their specific security concerns, respondents point to data privacy as their biggest worry (37 per cent), while attacks on critical infrastructure (28 per cent) and network vulnerabilities (24 per cent) are also cause for concern. Eleven per cent cite insecure IoT devices.
“It’s interesting to see the timeframe that many of our respondents place on smart city deployment, when in fact smart cities are already here,” according to Phil Beecher, President and CEO of Wi-SUN Alliance. “Smart lighting is being deployed using canopy mesh networks and is already helping to save operational costs through reduced energy consumption and better reliability. These deployments can be used to improve public safety and for additional services such as intelligent transport systems, smart parking and electric vehicle charging stations.
“Certainly security and interoperability remain critical factors in any smart city deployment and one of the reasons why developers and utilities are increasingly specifying Wi-SUN technology as part of a robust, resilient and scalable wireless communications network. As more IoT devices connect to the network, the opportunity for major disruption through security vulnerabilities is increasing all the time, while greater IT/OT (operational technology) convergence, especially in utilities, will increase the risk of attacks on critical infrastructure.”
There are already more than 91 million Wi-SUN capable devices deployed globally (Navigant Research) as utility companies, service providers and city developers roll out new IoT applications and services for smart cities and utilities. Wi-SUN FAN is the network technology behind a number of major smart city projects around the world, including the City of London, Copenhagen and Glasgow, as well as a growing number of smart utility networks.
In a report by IoT Analytics, Miami was identified as the world’s number one city for connected streetlights, with nearly 500,000 units deployed, supported by Wi-SUN compatible technology. Paris is number two with 280,000 connected streetlights retrofitted across the city with a Wi-SUN compatible mesh network.
Dr. Michelle Supper, Forum Director of The Open Platform 3.0™ Forum at The Open Group, comments:
“In smart cities, electronic sensors are tactically deployed to collect specific data. When connected, these sensors form an Internet of Things (IoT) network and deliver a steady stream of quality data to governments. When analysed, this data can be leveraged to glean fresh and useful insights into the state of the city such as air quality or congestion levels. Equipped with this information, key decision-makers can make informed, forward-looking choices.
Making the transition from ‘traditional city’ to ‘smart city’, however, is difficult. To allow information to flow freely between systems and cities, using open, non-proprietary standards will be crucial. This is why the European Union Research and Innovation Programme has put a large amount of funding into the bIoTope smart city project – part of the Horizon 2020 Programme.
By collaborating with The Open Group, industry players and academic institutions, bIoTope is running a series of cross-domain smart city pilot projects. These pilots have been rolled out in Brussels, Lyon, Helsinki, Melbourne and Saint Petersburg and will provide proofs of concept for applications such as smart lighting, smart metering, weather monitoring and the management of shared electric vehicles.
The end goal of the bIoTope project is to showcase the benefits of utilising the IoT, such as greater interoperability between smart city systems. In addition, it will provide a framework for privacy and security to guarantee responsible use of data on the IoT.”
Thomas Seiler, the CEO of u-blox, says:
“The UN predicts that in 30 years 70% of the global population will live in cities and that by 2030, there will be 43 megacities (cities with more than 10 million people) around the world, the vast majority in Asia. By 2025, Asia alone will have at least 30 megacities, including Mumbai, India (2015 population of 20.75 million people), Shanghai, China (2015 population of 35.5 million people), Delhi, India (2015 population of 21.8 million people), Tokyo, Japan (2015 population of 38.8 million people) and Seoul, South Korea (2015 population of 25.6 million people).
Already today, cities use 65% of global energy and create 70% of greenhouse gases. So imagine the environmental challenges megacities face: chaotic and polluting traffic, and inefficiently managed energy supplies (utilities such as electricity, gas and water). But megacities will also face social challenges such as overcrowded public transport, a lack of parking spaces, outdated infrastructure, old and unsafe buildings, and rising crime, as populations increase in close proximity.
Technology is not a panacea but it can be used to make cities smarter. Traffic and transportation can be streamlined in the most efficient and safe way, as millions of people have to commute from A to B, which also means adding intelligence to the infrastructure. By wirelessly connecting a car to the vehicles and roadside infrastructure around it, such as traffic lights, V2X (vehicle-to-everything) technology is about to transform mobility as we know it. With increased awareness of every nearby vehicle, including those that lie out of sight, a car will be able to actively reduce the risk of accidents and improve traffic flow. For instance, traffic managers will be able to leverage data collected by connected Road Side Units (RSUs) to assess and manage traffic in real time to prevent congestion before it forms, reducing CO2 emissions and shortening commutes. Similarly, truck platooning uses direct vehicle‑to‑vehicle (V2V) communication to allow trucks to safely drive in compact convoys, saving fuel and precious space on the road. Autonomous driving will also depend on V2X communication in scenarios in which cameras and other vehicle sensors are unable to detect vehicles. Examples include when vehicles are too far away, out of sight, or when visibility is poor.
Street lighting solutions can help save energy as well as increase safety, if managed in a smarter way. By making street lights “smart”, that is by using sensors as well as wireless and positioning technologies, one can gain considerable savings in costs and environmental resources. For instance, by placing sensors in adaptive street lights, one can transmit data about environmental conditions to control when lights turn on/off and how brightly they shine. Or, on a foggy day, lights could come on earlier than originally programmed and, in more remote areas, they could brighten up from a dimmed power savings mode only when traffic is approaching. Smart street lights also provide a natural connectivity backbone for a range of smart city applications, for instance deployable surveillance for public events where surveillance cameras share data by using the gateway linking the smart lighting system to the cloud.
Energy supplies could be better managed to save costs and optimize load balance, for instance by using smart meters. According to ABI Research (2018), 1 billion smart meters will be installed globally between 2019 and 2023.
Waste management can be improved for better efficiency: waste haulers can be informed wirelessly whether bins are full and must be emptied. By equipping waste bins with wirelessly connected smart sensors, one can guarantee up-to-date data on a fleet of containers. Waste haulers are then able to monitor fill levels and thus optimize waste collection routes, eliminating unnecessary pick-ups. By utilizing trucks and containers more efficiently, haulers can reduce costs through lower fuel usage and less truck wear and tear, all while improving environmental friendliness through reduced traffic and fewer greenhouse gas emissions.”
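To illustrate that route-optimisation point in the simplest possible terms – a toy example with invented coordinates and thresholds, not a description of any real fleet system – the sketch below selects only the bins whose sensors report them as nearly full and orders them with a basic nearest-neighbour heuristic.

import math

# Illustrative only: visit just the bins that report a fill level above the
# threshold, ordered by a simple nearest-neighbour heuristic from the depot.
# Real collection routing would use road networks and a proper solver.

def plan_route(bins, depot=(0.0, 0.0), threshold=0.8):
    to_visit = [b for b in bins if b["fill"] >= threshold]
    route, current = [], depot
    while to_visit:
        nearest = min(to_visit, key=lambda b: math.dist(current, b["pos"]))
        route.append(nearest["id"])
        current = nearest["pos"]
        to_visit.remove(nearest)
    return route

if __name__ == "__main__":
    bins = [
        {"id": "bin-1", "pos": (2.0, 1.0), "fill": 0.95},
        {"id": "bin-2", "pos": (0.5, 0.5), "fill": 0.40},  # skipped: not yet full
        {"id": "bin-3", "pos": (1.0, 4.0), "fill": 0.85},
    ]
    print(plan_route(bins))  # -> ['bin-1', 'bin-3']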
Seth Wiesman, Senior Solutions Architect, Ververica, explains:
There has been significant discussion recently around how Smart City investment can benefit the global economy. A recent report from ESI ThoughtLab found that Smart City initiatives provide “a robust cycle” of economic growth by unlocking savings and attracting businesses, residents, and talent in a designated geographic area. Let’s now take a step back and define Smart Cities first, before delving into the reasons that make stream processing and data analytics one of their core components.
A Smart City is any community that uses different types of automated data collection to supply information that ultimately increases urban efficiency. Data is usually collected from citizens, devices, and assets — and is analyzed to monitor and manage anything from traffic and transportation systems to power plants, water supply networks, waste management, law enforcement, information systems, schools, libraries, hospitals, and other community services. As more and more governments around the world realize the need to modernize aging infrastructure, manage it more effectively, and cope with the increasing populations of megacities worldwide, it is only natural that the International Data Corporation (IDC) predicts that smart city technology spending will reach $135 billion by 2021 from $80 billion in 2016.
In an effort to improve the environmental, financial, and social aspects of urban life, governments are deploying technology-led initiatives in districts or entire megacities around the globe. For instance, in India, funding for the city remodeling of Surat is coming from a dedicated Smart City initiative, part of a national government scheme launched in 2015 across 100 cities. The city is spending close to $400m on projects including the live-tracking of buses, new water treatment plants, solar and biogas generation, and automated LED street lights. In New York City, LinkNYC is a first-of-its-kind communications network that will replace over 7,500 pay phones across the five boroughs with new structures called Links. Each Link will provide super fast, free public Wi-Fi, phone calls, device charging and a tablet for access to city services, maps, and directions.
When it comes to dealing with the massive amounts of data produced and circulated through these networks, what software paradigms can best support Smart City projects to enable improved infrastructure and sustainability? And why should governmental organizations consider adopting strategies based on technologies that allow data to be processed in flight – that is, as soon as it is generated – for analysis in any Smart City project?
Smart City projects have one common denominator: various connected devices generate data continuously which needs to be analyzed and acted on in real-time, using a robust data processing framework. These projects pose unique challenges when it comes to ingesting, processing and making sense of real-time data: events from connected devices across a geographical area need to be processed with sub-second latency, something that renders traditional databases and ETL (Extract, Transform and Load) pipelines, based on batch data operations, inefficient. Especially in areas with limited connectivity or no network coverage, as well as under extreme weather conditions, data produced by edge devices over cellular networks will most likely arrive out-of-order or late — a paramount challenge for the efficient and timely handling of real-time data in any Smart City project.
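As a deliberately simplified, in-memory sketch of the event-time semantics described above – not the API of Apache Flink or any other stream processor, and with field names, window size and lateness bound all assumed for the example – the snippet below counts sensor readings into one-minute event-time windows, tolerates out-of-order arrivals within an allowed lateness, and drops events that arrive after their window has been finalised.

from collections import defaultdict

# Illustrative event-time windowing with allowed lateness. Events carry their
# own event_time and are processed in arrival order, which may differ from
# event-time order; a watermark trails the highest event time seen so far.

WINDOW_S = 60            # tumbling one-minute windows
ALLOWED_LATENESS_S = 30

def process(events):
    windows = defaultdict(int)   # window start -> count of readings
    max_event_time = 0
    dropped = []
    for event in events:
        max_event_time = max(max_event_time, event["event_time"])
        watermark = max_event_time - ALLOWED_LATENESS_S
        window_start = (event["event_time"] // WINDOW_S) * WINDOW_S
        if window_start + WINDOW_S <= watermark:
            dropped.append(event)        # window already finalised: too late
        else:
            windows[window_start] += 1   # counted despite arriving out of order
    return dict(windows), dropped

if __name__ == "__main__":
    arrivals = [
        {"sensor": "s1", "event_time": 65},
        {"sensor": "s2", "event_time": 10},   # out of order, still within lateness
        {"sensor": "s3", "event_time": 130},
        {"sensor": "s4", "event_time": 20},   # far too late: its window has closed
    ]
    print(process(arrivals))
    # -> ({60: 1, 0: 1, 120: 1}, [{'sensor': 's4', 'event_time': 20}])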
Patience is required, says Rob Orr, Executive Director, Virgin Media Business:
“Cities that want to become “smart” should aim to future-proof their infrastructure as much as possible, ensuring their systems can cope with the demands of any technology acquired in future. The key to getting this right is having a partner committed to the city’s digital transformation – both in the short and long-term – that can predict market changes and ensure they are adequately prepared”.
“One of the most pressing demands will be evolving regional and UK-wide telecommunication networks. Clearly, the enormous amount of data required by the new connected city ecosystem will require new networking technology, demanding the intelligent use of limited bandwidth and ensuring that systems are fully protected from cyber-threats.
“To meet these evolving demands, networking technology is going to need to incorporate scalability, agility and security. Scalability and agility meaning a network that can increase in processing power and adjust to new innovations; and security meaning a solution that’s protected by powerful encryption. Fortunately, there are already ways of achieving this; SD-WAN, for example, is a crucial innovation in network technology, which is driving massive changes in network performance and empowering technical managers to visualise and control bandwidth in real-time, quickly responding to change.
“City officials need to take the time to explore the best digital systems and platforms to support the smart city of tomorrow. Putting in place the right technology and infrastructure now will set them up for long-term success.
“The journey to smart cities requires patience, but with intelligent investment and strategic planning, it is absolutely achievable sooner rather than later.”
The impact of artificial intelligence and intelligent automation on businesses across all sectors is clear for everyone to see. Gartner reported at the beginning of this year that 37% of organisations have implemented AI in some form, and PwC found that 71% of executives believe AI will be the business advantage of the future.
By Jose Lazares, VP Product Strategy and Business Management at Intapp.
The legal industry is a great example of where the benefits of AI and intelligent automation are increasingly impacting the day-to-day tasks of individuals as well as the broader business. There is growing awareness, however, that AI technologies within the legal space must be designed with the specific challenges and intricacies of the legal industry in mind. Firms are increasingly seeking AI solutions that are built by subject-matter experts and deliver business outcomes with tangible ROI.
Modernising the firm
The legal sector is, by its very nature, an industry with multiple processes in play across the firm and the client lifecycle. Law firms have historically worked in departmental siloes, resulting in incomplete views into client relationships, duplicate data entry, and manual ad hoc processes that increase work effort and reduce operational effectiveness. This problem is compounded when each of these disparate processes and tasks employs different point-solution technologies and tools that don’t communicate with each other, and which in turn result in greater data fragmentation. According to the results of a survey we conducted last year, 90% of firms believe AI is critical to opportunity identification, cross selling and managing client expectations; however, because of their fragmented data, more than 80% of firms fall well short of making that happen.
Modernisation and innovation are explicit goals for most – if not all – top law firms. But to truly modernise, they need to de-silo the data to make it available where and when it’s needed. For example, the wealth of client and matter data that is collected during the new business intake process could be used to augment an ever-improving profile of the client. Firms need to be able to cross-correlate data from across the firm in order to unearth deep predictive insights that will have the power to unlock entirely new opportunities.
To achieve the desired end state, firms need to aggregate their data under one unified software system, including practice management, client lists, matter types, matter numbers, and knowledge repositories, across the entire lifecycle from client development to client delivery. Firms that can harness all that data and view it holistically will have a distinct competitive advantage.
The broader business impact
Areas where firms commonly employ AI today include eDiscovery, legal project management, and contract review. But AI has the potential to transform the business of law in even bigger ways. Intapp’s recent research report, “Navigating a new reality in the client-empowered era,” showed that law firms are particularly interested in implementing AI technologies for higher-order outcomes such as scoping and pricing of matters to improve budgeting accuracy and identifying target opportunities through relationship mapping.
For firms who plan to invest in AI technologies to bring about such larger outcomes, decision makers should keep in mind the vendor’s approach to AI. Many AI solutions have been created with just one purpose in mind, making it difficult to ensure that the output is used correctly across a business area and not taken out of context. The most effective approach to technology selection is to first holistically examine the business issue and the key process steps that AI can automate, either by reducing human input or by predicting outcomes. Next, identify and evaluate the data that supports the tasks you are seeking to automate. Once you’ve assessed the data and the processes involved, you can then evaluate AI and machine learning technologies that would help solve the issue and ensure real, measurable outcomes.
A great example of this approach is our recent work on leveraging AI for conflicts clearance. One challenge that firms consistently have is the excessive amount of time that lawyers and support staff spend on administrative tasks instead of working on revenue-generating activities. One of our client firms spent an average of 8.2 person-hours on a typical conflicts-clearance effort from initial search to resolution. In our Intapp Conflicts AI pilot program, this firm slashed the time spent on the conflicts process by 74%, to only 2.1 hours. The projected cost savings for this firm will total nearly $10M over four years and will allow their lawyers and staff to focus on client success.
Using AI, firms can sign more new clients, more easily identify prospective business, effectively realise cross-sell potential, and discover opportunities to propose new services to existing clients. The impact that AI can make on the business is determined by the approach firms take. Law firms, by considering the impact on broader business decisions, can provide a clear example for how AI can lead to truly limitless potential.
The legal sector might not have been among the first to feel the full force of digital disruption, but there’s no doubt that it is beginning a rapid transformation globally now.
By Ollie Imoru, Account Director at global interconnection and data centre company Equinix.
In many ways the sea change can be traced back to the conditions brought about by the banking crisis over a decade ago – it drove some law firms’ clients to build their own in-house legal teams, only referring to external specialists when absolutely necessary. It was coupled with a general demand for charges based on outcomes and services that were more responsive to particular needs.
In parallel, some governments were keen to liberalise the legal services market, and foster competition through lighter regulation. For instance, the UK’s Legal Services Act enabled startups to enter the market with new data-driven operational and business models. They rely on data analytics to fuel artificial intelligence (AI), such as machine learning and deep learning, which in turn enables unprecedented levels of automation.
These new ways of working introduced many capabilities. For example, insights based on data can provide more information, and different kinds of information, for document analysis, say, than was previously possible when solely reliant on human abilities. Also, the more data fed into an AI engine, the more it ‘learns’, and refines its performance and extends its knowledge base.
Technology is key to competing
To succeed, start-ups and established legal firms alike need to understand how to harness technology to help them achieve their strategic business goals and grow sustainably. Global legal firm HFW is based in the City of London, and having been in operation for over 136 years, has already done a great deal of modernising. It was one of the first UK-based companies to look overseas to drive growth and now generates more than 60% of its revenue outside the UK, with international revenue tripling in the past seven years.
Much of that growth was driven by the company transforming its business by separating lawyers and the practice of law from the delivery of legal services by paralegals, who have new capabilities made available to them through technology and automation. In a decade, the firm went from having eight to 19 offices across the Americas, Europe, the Middle East and Asia-Pacific.
To support this, HFW concluded it needed a reliable data-centre platform to standardise its approach to IT globally, to meet today’s considerable challenges, and be ready for new opportunities. The company’s CIO, Chris White, noted: “We all know the technology environment is changing significantly [and] very, very quickly. We hear a lot about AI, machine learning, process automation. For law firms to remain relevant, they need to start working out how they can make use of this technology.”
Close collaboration
Through close collaboration with Equinix, HFW built out its digital edge in key markets including Dubai, London, Hong Kong, Melbourne and Paris on the global Platform Equinix. By distributing its IT this way, the firm gained the scale and reach it needed, and was also able to position its IT resources closer to clients and employees, improving their quality of experience (QoE).
As a key piece of this digital edge strategy, HFW deployed Equinix Performance Hubs in select geographic markets, which optimise wide area network (WAN) and peering technologies to provide consistent, direct and secure connections. They avoid inefficient, expensive WAN routes and bottlenecks, as well as the public internet, which carries inherent risks and congestion. Reliable, consistent service is especially important to legal firms given the likely sensitivity of the material and the large files they need to transfer.
This type of edge strategy, according to analyst house IDC, is crucial in every digital transformation project. Indeed, IDC’s surveys reveal that delivering a better customer experience at the customer location and enabling new digital services, with faster response time and real-time analytics, are among the main drivers for moving intelligence towards the edge of the networks.
Other big advantages
Adopting a distributed architecture can give legal firms other advantages, such as greater data privacy to meet regulatory requirements and general concerns about data sovereignty. For instance, compliance rules such as the European Union’s (EU) General Data Protection Regulation (GDPR) mandate organisations to understand and control where their data resides. By distributing applications through an interconnected network, HFW adheres to data residency and data governance requirements.
It is probable that this type of stringent regulation will be adopted in other parts of the world as concerns about how data is used, and by whom, escalate. In mid-April, the EU’s most senior privacy official urged the US Government to adopt an equivalent to GDPR as a precursor to wider talks between the bloc and Washington for the transatlantic sharing of data by big business.
It also looks likely that in future the legal sector, which currently largely relies on private cloud, will shift to hybrid cloud infrastructure as the technology evolves and becomes widely accepted. The benefits of using hybrid cloud are lower costs and potentially greater scalability of public cloud, balanced with the higher degree of security offered by private cloud where needed. This is something we see more and more companies adopting – utilising Equinix Cloud Exchange Fabric (ECX Fabric) to access major cloud service providers—like Amazon Web Services, Google Cloud Platform, Microsoft Azure and Office 365, and Oracle Cloud—regardless of physical location, via virtual interconnection.
Dealing with multiple edges
As IDC’s Gabriele Roberti pointed out, dealing with multiple edges is not an easy task. But, in a highly regulated industry like legal, edge not only opens the way to new outcome-based business models, enabled by distributed AI capabilities, it also guarantees the highest level of privacy, security and regulatory compliance without requiring the data to be transferred outside virtual or geographical boundaries. By leveraging Equinix Performance Hubs, legal firms can protect their investment in IT and ensure they are ‘cloud-ready’ when the time comes, without having to deal with an “integration tax”.
As changes in the legal sector accelerate, law firms need to think strategically about the way IT can add value to the client experience and become a true differentiator for the business.
The July issue of Digitalisation World includes a major focus on smart cities. Much talked about, and already being developed in various parts of the world, what are the key ingredients needed to create a smart city? In particular, what role do IT and digital transformation technologies have to play? Through a mixture of articles and comment pieces, industry experts provide plenty of answers.
Part 12.
A new forecast from the International Data Corporation (IDC) Worldwide Semiannual Smart Cities Spending Guide shows global spending on smart cities initiatives will reach $189.5 billion in 2023. The top priorities for these initiatives will be resilient energy and infrastructure projects followed by data-driven public safety and intelligent transportation. Together, these priority areas will account for more than half of all smart cities spending throughout the 2019-2023 forecast.
"In the latest release of IDC's Worldwide Smart Cities Spending Guide, we expanded the scope of our research to include smart ecosystems, added detail for digital evidence management and smart grids for electricity and gas, and expanded our cities dataset to include over 180 named cities," said Serena Da Rold, program manager in IDC's Customer Insights & Analysis group. "Although smart grid and smart meter investments still represent a large share of spending within smart cities, we see much stronger growth in other areas, related to intelligent transportation and data-driven public safety, as well as platform-related use cases and digital twin, which are increasingly implemented at the core of smart cities projects globally."
The use cases that will experience the most spending over the forecast period are closely aligned with the leading strategic priorities: smart grid, fixed visual surveillance, advanced public transportation, smart outdoor lighting, and intelligent traffic management. These five use cases will account for more than half of all smart cities spending in 2019, although their share will decline somewhat by 2023. The use cases that will see the fastest spending growth over the five-year forecast are vehicle-to-everything (V2X) connectivity, digital twin, and officer wearables.
Singapore will remain the top investor in smart cities initiatives, driven by the Virtual Singapore project. New York City will have the second largest spending total this year, followed by Tokyo and London. Beijing and Shanghai were essentially tied for the number 5 position and spending in all these cities is expected to surpass the $1 billion mark in 2020.
On a regional basis, the United States, Western Europe, and China will account for more than 70% of all smart cities spending throughout the forecast. Japan and the Middle East and Africa (MEA) will experience the fastest growth in smart cities spending with CAGRs of around 21%.
"We are excited to present our continued expansion of this deep dive into the investment priorities of buyers in the urban ecosystem, with more cities added to our database of smart city spending and new forecasts that show the expanded view of smart cities, such as Smart Stadiums and Smart Campuses," said Ruthbea Yesner, vice president of IDC Government Insights and Smart Cities programs. "As our research shows, there is steady growth across the globe in the 34 use cases we have sized and forecast."
Mobile phones – the most familiar piece of technology for virtually everyone in the developed world - can make a huge difference in enterprise as the gateway to the smart city, says Gerry Brennan, CEO, Cloudbooking.
With 5G presumed to underpin the development of smart cities, it will be the mobile phone that acts as an access point into that network.
Workplace software is more accessible than ever on our phones, thanks to the popularity of flexible working. Siri and Cortana will become the voice of – or even make way entirely for – a pocket-PA, acting as a guide through the smart city by combining your workplace data with relevant information on things like traffic, public transport delays, and space availability in your office.
Eventually, this service will be able to pre-empt problems in your working day. Everything from booking the seat next to your team colleagues, to altering your commute route to avoid tube delays, could be done automatically.
This will rely on continued and extensive investment in 5G networks. When networks operating above 6GHz bands become the norm, huge upgrades in latency and speed will enable workplace technologies that aren’t feasible on today’s networks.
It’s then down to individual building owners and businesses to take advantage of these by bringing their office space into the smart building age. New hardware, and investment in the infrastructure of 5G, will see the networks explode beyond the mobile phone, underpinning the urban enterprise of modern cities heading towards a smart city model.
For example, technologies driving workplace analytics will be able to measure more metrics in the office, in far greater volumes than before, in near real time. Businesses in the smart city can thereby access a comprehensive view of how buildings are being used and how their employees prefer to work. This holy grail of information will help shape how spaces are offered and managed, as well as enhancing employees’ experience.
Most employees are keen to put this data to work in improving their working lives. Recent research from Cloudbooking suggests that more than half of employees (52%) are looking for a personalised experience from their workplace tech – they actively want their technology to react autonomously to their individual preferences, and this isn’t possible without an element of data capture.
A smart building should make an employee’s experience as seamless and personalised as possible, and a smart city is, at its heart, a lot of very smart buildings. If businesses commit to using the new data in this environment to streamline and improve processes, there is huge potential to revolutionise their employees’ experiences.
Designing for the smart life: human factors matters
The ‘soft machine’ powering the world’s smartest cities
A smart switch from technology-centred to people-centred design is the catalyst for a new era of ‘inclusive smart cities’, where the focus is firmly on the citizen, says Barry Kirby, Smart Cities Champion at the Chartered Institute of Ergonomics & Human Factors.
What constitutes a smart city?
As technology quickly advances, so too do our surroundings. ‘Smart cities and communities’ is a vision whereby technology is integrated into all the elements of a city, such as its schools, transportation systems, shops, waste management, hospitals, and law enforcement.
When Googling smart cities, a stunning array of new technologies fills our screens. From driverless cars and smart motorways, to greetings from a humanoid robot, and drones serving your morning coffee, our technology-hungry consumer minds are switched on by cool technology. But the big question we must ask is, will these technologies deliver the transformations that we value most in our daily lives?
What are the key ingredients needed to create a smart city?
People, and an understanding of what they want and need! Relegating the tech-hungry beast that has dominated smart city design opens the door to a new generation of inclusive smart cities, driven by the communities that will inhabit them. By taking a people-centred approach to smart city design, the local community has influence over the services and technologies procured, ensuring that the city delivers for the people – which, fundamentally, means it will be a success.
By following the ‘human factors’ mantra of designing for people, developers can foster the crucial local community relationships that are instrumental in a city’s successful smart transformation.
Trends to watch
Disrupting the industry status-quo
To create liberating living experiences for future societies, a new wave of smart city developers are upending the status quo, challenging the industry’s over-reliance on technological advancements to drive innovation and asking the big question: how do we enable people to live better lives?
Technological tensions
As smart cities develop and gather pace, governments and organisations are expecting to save significant sums of money and provide just-in-time services to residents, as well as providing a new boom in the technology sector. Technology shows have been dominated by ‘smart’ technologies, with a focus on the integration of smart home devices.
A true smart city is the integration of a variety of systems that are currently deployed within a local authority or municipality, according to Steve Austin, Systems Architect for UK&I at Signify.
A city will have a lot of different systems collecting valuable data, but what makes a city truly smart is when it’s possible to combine the individual systems into one central system, so those managing it get a fully holistic view. This allows them to make informed, data-driven decisions, instead of having to navigate multiple individual data streams.
A perfect example of this is smart street lighting. It is possible to have lighting that utilises motion sensors – tracking footfall in the immediate environment and varying the light level depending on pedestrian or vehicular movement – which leads to more efficient lighting and can help reduce energy usage. Lighting can also be dynamically controlled when that data is integrated with other information collected by, for example, Brightsites smart poles, which offer environmental monitoring. This combined view of data allows city managers to make more informed decisions that can have a more tangible, positive impact for citizens and businesses.
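To make that idea concrete, here is a minimal sketch (in Python) of the kind of dimming rule a central platform might apply to a luminaire based on recent activity counts. The thresholds, weightings and counts are purely illustrative assumptions, not Signify’s or Brightsites’ actual logic or interfaces.

```python
# Illustrative only: a simple adaptive-dimming rule for a smart streetlight.
# Counts, weightings and output levels are hypothetical, not vendor logic.

def dim_level(people_last_5min: int, vehicles_last_5min: int) -> int:
    """Return a light output level (percent) from recent activity counts."""
    activity = people_last_5min + 2 * vehicles_last_5min  # weight vehicles higher
    if activity == 0:
        return 30    # background level on an empty street
    if activity < 10:
        return 60    # moderate activity
    return 100       # full output when the street is busy

# Quiet residential street at 2am vs a busy junction at 6pm
print(dim_level(people_last_5min=0, vehicles_last_5min=0))    # 30
print(dim_level(people_last_5min=12, vehicles_last_5min=8))   # 100
```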
With the wide use of camera systems in the UK, camera analytics are increasingly being exploited in the development of a smart city. City managers and urban planners recognise that cameras are no longer just about security but can be integrated with other systems – for example, lighting systems, where footfall in certain parts of the city is analysed and lighting levels tailored to different times of day. An essential area where this integration is really coming to life is the wider adoption of electric vehicles. Integrated data-sharing in a city will let citizens track charging point locations, and let city managers see where cars are charging and at what times – helping them plan for things like future charging stations, or increase lighting responsively when a charging point is being utilised.
We’re already starting to see these smart city innovations now. Signify are currently conducting interim pilots with Highways England where interactive dynamic lighting is used to control sections of lighting on the M4 motorway based on the volume of vehicles. Interact City not only allows changing lighting levels based on historical data, but also makes it possible to measure real-time traffic flow and integrate this back into the central lighting system to adapt lighting levels on the road in real time.
What is holding back the arrival of smart cities?
Simply put – investment.
The hardest part for businesses is taking the first step into a sector where people have never really spent money before. Today it is not easy to join the dots or see immediately what value the combined system data can offer a city; many municipalities are enthusiastic to do something but are not always certain where to take that first step.
The role of IT and digital transformation technologies is to push the boundaries of possibility. There is so much potential in the smart cities space, but for those with the power to install it, the choice can be overwhelming and can actually impede the development of a smart city agenda.
My advice: Put innovation into the market that is already a reality and easy to implement, and watch the cities of the future unfold.
By Steve Hone, CEO and Co-founder, The DCA
The theme of this month’s journal is cooling. I would like to thank all the DCA members who submitted articles for inclusion in this edition of the DCA Journal; they provide some thought-provoking insights into an ever-evolving part of our fast-moving sector.
Before moving on to the subject of how we continue to keep our IT cool, I wanted to revisit last month’s theme (Energy Provision & Efficiency), as a recent busy weekend demonstrated just how data hungry we have become. At the last minute I had an opportunity to go to the Glastonbury Festival for the first time in a very long time: this proud uncle was recruited as a roadie by his nephew, the drummer for the indie band Luna Lake, who performed at the festival for the very first time.
When Glastonbury first started it was £1 to get in and Michael Eavis provided free milk to all 1,500 revellers. That was in the 1970s, by the way, when photographs were captured on a roll of 35mm film and, if you wanted to phone home, you had to walk to the nearest call box. This year 200,000 people descended on the sleepy village of Pilton in Somerset and, far from being unplugged from the outside world for five days of hedonistic all-night partying, Glastonbury 2019 was one of the biggest tech-enabled festivals in the world. It all starts with technology: the admission process includes barcoded wristbands and facial recognition checks, so there was no jumping over the fence for me this time round!
Glastonbury 2019 claimed to be the first 5G-connected festival ever. EE installed 5G network points across the 900-acre site, providing anything from 2G to 5G signal as well as Wi-Fi wherever you were on Worthy Farm.
In addition to lots of liquid refreshment and food, 70 terabytes of data were consumed by revellers, which, to put it in perspective, is equivalent to 784 million Instagram posts. An estimated 20 million messages and calls were made, 29 million pictures were taken, and 2.4 million personal videos were uploaded to the cloud – and that’s before you include the 30 hours of airtime coverage viewed by an audience of 6 million, either on TV or via live streaming servers. 2020 is the festival’s 50th anniversary and is forecast to be even bigger, so expect these data-hungry revellers to be out in force again next year.
Having been baked to a crisp in the 90ºF Glastonbury sun, the one thing I definitely needed was cooling down. With plastic bottles banned for the first time and queues for water an hour long, it was just as well I was not a server in a data centre, otherwise I would definitely have fallen over!
On the subject of water, there continues to be much talk about a sea change in the way we go about cooling our IT hardware moving forward. Some pundits are predicting the end of air-based cooling systems in favour of thermally superior media such as liquids; however, using liquid to cool is nothing new. Whatever your views on the future, air still occupies about 90% of the market and, given that average density levels have not increased anywhere near as quickly as those same pundits predicted, it looks like air will remain a viable, affordable solution for many years to come.
Having said that, for certain high-density applications logic, as well as physics, dictates that at some point you have no choice but to throw your hands up in the air and look to alternatives to cool your IT! This does not mean you have to jump straight into the deep end and go fully immersed; there are in fact lots of options open to you. If I were to predict anything, which is always dangerous, it would be that more commercially facing data centres will start offering customers a choice of cooling platforms to suit their existing and future compute needs.
Next month we shift focus again towards security, covering both physical and down-the-wire threats and the best practices which need to be deployed to mitigate the risks. The copy deadline is the 24th July, so please send all submissions to amandam@dca-global.org
By Alan Beresford, EcoCooling
Since 2017, DCA member EcoCooling has been involved in a ground-breaking pan-European research project (with partners H1 Systems, Fraunhofer IOSB, RISE SICS North and Boden Business Agency). In this article we will provide an update on the exciting results being achieved and what we can expect from the project in the future.
The project objective: To build and research the world’s most energy and cost efficient data centre.
Part of the EU’s Horizon 2020 Innovation and Research programme, Boden Type Data Centre One (BTDC-1) is one of the largest non-commercial data centre research projects ever set up. It is located in Boden, Sweden, where there is an abundant supply of renewable and clean hydro-electricity and a climate perfect for direct fresh air cooling of data centres.
BTDC-1 has a design capacity of 500kW of IT load. It is made up of three separate research modules, or pods, housing Open Compute/conventional IT, HPC and ASIC (Application Specific Integrated Circuit) equipment, and the EU’s target was to design a data centre with a PUE of less than 1.1 across all of these technologies. The project has already demonstrated PUEs of below 1.02, which we believe is an incredible achievement.
Boden Type One fully populated design
The highly innovative modular building and cooling system was devised to be suitable for all sizes of data centre. By using these construction, cooling and operation techniques, smaller-scale operators will be able to achieve or better the cost and energy efficiencies of hyperscale data centres.
We all recognise that PUE has limitations as a metric, however in this article and for dissemination we will continue to use PUE as a comparative measure as it is still widely understood.
Current Data Centre Energy Cost
In 2019 the commercial data centre ‘norm’ has a PUE of 1.8* – so the average data centre requires about 800 watts of additional energy, mostly for cooling, for each kilowatt of IT power, making the cooling bill roughly 80% of the total IT energy usage.
Specialist hyperscale data centres, such as those run by Facebook, have achieved a PUE as low as 1.15 – meaning they use only 15% extra energy to power the cooling and supporting infrastructure. Most commercial data centres would regard this sort of PUE as “world leading”.
As I will explain later in this article, BTDC-1 uses similar principles of power distribution and fresh air cooling to the Facebook OCP style of data centre, but the research project has demonstrated that, on a small scale, this figure can be improved on significantly – in fact, we have smashed it out of the park!
At BTDC-1, one of the main economic features is the use of EcoCooling’s direct adiabatic (evaporative) and free cooling technology, which produces the cooling effect without requiring an expensive conventional refrigeration plant.
This brings two facets to the solution at BTDC-1. Firstly, on very hot or very cold, dry days, the ‘single box approach’ of the EcoCoolers can switch to adiabatic mode and provide as much cooling or humidification as necessary to maintain the IT equipment’s environmental conditions within the ASHRAE ‘ideal’ envelope, 100% of the time.
Exciting First Results
With the cooling and humidification approach I’ve just outlined, we were able to produce very exciting results.
Instead of the commercial data centre norm of a PUE of 1.8 – 80% extra energy used for cooling – we have been achieving a PUE of less than 1.05, lower than the published values of some data centre operators using ‘single-purpose’ servers – but we’ve done it with general-purpose OCP servers. We’ve also achieved the same PUE using the high-density ASIC servers favoured for bitcoin and blockchain.
This is an amazing development in reducing the cost and carbon footprint of data centres. Let’s quickly look at the economics applied to a typical 100kW medium-sized data centre: the annual cooling energy cost drops from around £80,000 to a mere £5,000 – a £75,000 per year saving for an average 100kW medium-sized commercial data centre.
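As a rough sanity check on those figures, the short sketch below works through the arithmetic, assuming continuous operation and an illustrative electricity price of about £0.114/kWh (my assumption; the article does not state the tariff behind its figures).

```python
# Rough annual overhead (mostly cooling) energy cost for a 100kW IT load at various PUEs.
# The electricity price is an illustrative assumption, not a figure from the article.

IT_LOAD_KW = 100
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.114  # GBP, assumed

for pue in (1.8, 1.15, 1.05, 1.03):
    overhead_kw = IT_LOAD_KW * (pue - 1)       # energy drawn above the IT load itself
    annual_cost = overhead_kw * HOURS_PER_YEAR * PRICE_PER_KWH
    print(f"PUE {pue}: ~{overhead_kw:.0f} kW overhead, ~GBP {annual_cost:,.0f} per year")

# PUE 1.8 gives roughly £80,000 a year and PUE 1.05 roughly £5,000 - the £75,000 saving above.
```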
Pretty amazing cost (and carbon) savings I’m sure you’d agree.
Smashing 1.05 PUE
What we did next has had truly phenomenal results and I believe presents a real ‘wake-up’ call to conventional server manufacturers - if they are ever to get serious about total cost of ownership and global data centre energy usage.
You may know that within every server, there are multiple temperature sensors which feed into algorithms to control the internal fans. Mainstream servers don’t yet make this temperature information available outside the server.
However, one of the three ‘pods’ within BTDC-1 is kitted out with about 140kW of Open Compute servers. One of the strengths of this project’s partnership is that the servers’ averaged temperature measurements have been made accessible to the cooling system. At EcoCooling, we have taken all of that temperature information into the cooling system’s process controllers (without needing any extra hardware). Normally, the IT and the cooling systems are controlled separately, with inefficient time-lags and wasted energy. We have made them close-coupled and able to react to load changes in milliseconds rather than minutes.
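The project’s actual controller is more sophisticated than anything that fits here, but as a minimal illustration of the close-coupling principle, the sketch below adjusts a cooler fan’s speed on every cycle directly from an averaged server temperature reading. The setpoint, gain and the read/write functions are hypothetical stand-ins, not the BTDC-1 implementation.

```python
# Minimal, illustrative close-coupled control loop: the cooler fan speed follows the
# servers' own temperature readings. Setpoint, gain and I/O functions are hypothetical.
import time

SETPOINT_C = 30.0   # target average server temperature (assumed)
GAIN = 8.0          # % fan speed change per degree of error (assumed)

def read_average_server_temp() -> float:
    """Stand-in for polling the temperature data exposed by the servers."""
    return 31.2     # placeholder reading

def set_fan_speed(percent: float) -> None:
    """Stand-in for commanding the cooler's fan controllers."""
    print(f"fan speed -> {percent:.1f}%")

fan_speed = 50.0
for _ in range(10):                    # a handful of control cycles for illustration
    error = read_average_server_temp() - SETPOINT_C
    fan_speed = min(100.0, max(20.0, fan_speed + GAIN * error))  # clamp to 20-100%
    set_fan_speed(fan_speed)
    time.sleep(0.1)                    # react in fractions of a second, not minutes
```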
As a result, we now have BTDC-1 “Pod 1” operating with a PUE of not 1.8, not 1.05, but 1.03!
Pod 1 – Boden Type DC One
The BTDC-1 project has demonstrated a robust repeatable strategy for reducing the energy cost of cooling a data centre from £80,000 to a tiny £3,000 - per 100kW of IT load.
This represents a saving of £77,000 a year for a typical 100kW data centre. Now consider the cost and environmental implication of this on the hundreds of new data centres anticipated to be rolled out to support 5G and “edge” deployment.
Call to Intel, Dec, Dell, HP, Nvidia et al
At BTDC-1, we have three research pods. Pod 2 is empty - waiting for one or more of the mainstream server manufacturers to step up to the “global data centre efficiency” plate and get involved.
The opportunity for EcoCooling to work with RISE (Swedish institute of computer science) and German research institute Fraunhofer has allowed us to provide independent analysis and validation of what can be achieved using direct fresh air cooling.
The initial results are incredibly promising and considering we are only half way through the project we are excited to see what additional efficiencies can be achieved.
So come on Intel, Dec, Dell, HP, Nvidia and others: Who’s brave enough to get involved?
As the power of computers continues to climb exponentially, so too does the need for more sophisticated equipment to keep them cool. Tim Mitchell, sales director of Klima-Therm, examines the latest data centre cooling developments.
Digital information technology is the fastest-growing communication tool ever invented. It took radio broadcasters 38 years to reach an audience of 50 million, television took 13 years, but the internet just four years.
In 1995 there were 20 million internet users. By the year 2000 this had grown to 400 million. As of April 2019, 56.1% of the world’s population has internet access. And there is no sign of this phenomenal growth letting up. Annual global internet traffic is predicted to reach 3.3 trillion gigabytes by 2021 (having rocketed 127-fold since 2005).
Demand has been driven by the financial sector, especially by merchant banks, which are procuring ever larger data centres. To handle this colossal amount of information, there has been a remarkable leap in the number of blade servers (and the data centres to house them) constructed around the world.
This growth has also been heavily influenced by a staggering growth in online broadcasting, with video streaming having had a dramatic impact on the demand for close control air conditioning equipment.
However, mega data centres are not the only way to handle data from streaming sites such as Netflix, BBC iPlayer and Amazon Prime. One idea to drive greater efficiencies which is based, at least in part, around the streaming requirement is the increasing popularity of smaller data centres on the edge of towns.
These can serve video-on-demand requirements for, say, new films and binge-watch box sets locally, rather than from a distant data centre far removed from the populace. These local data centres offer significant macro energy-saving potential because, rather than being rejected, the heat can be used for a district heating loop or to serve neighbouring properties.
For example, we completed a data centre project in Tower Hamlets, London where planning conditions dictated that the client install a heat recovery loop to serve a residential development adjacent to the data centre.
This is a simple starting point for part of the ‘smart cities’ concept: the by-product of one process being used to do another job. Four-pipe Rhoss EXP heat pumps, for example, offer simultaneous and independent cooling and heating from the same plant. This means that a cooling system can easily become a complete cooling and heating solution, even including domestic hot water production, thereby obtaining a double output from a single unit with a single expenditure.
Hundreds of similar heat pump systems have been built in the last 15 years with these so-called ‘polyvalent’ units in residential and commercial buildings, office buildings, industrial complexes, hospitals, clinics and accommodation in general.
Unlike basic heat pumps or heat recovery, these EXP-style units can offer simultaneous or independent cooling and heating. The concurrent production of chilled water and hot water in the heat recovery mode effectively doubles the combined efficiency of the unit.
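A quick, purely illustrative calculation shows why: if a polyvalent unit delivers chilled water and recovered hot water at the same time from one electrical input, the useful output per kilowatt of electricity roughly doubles. The capacities and input power below are hypothetical round numbers, not Rhoss performance data.

```python
# Illustrative arithmetic only: why heat-recovery operation boosts combined efficiency.
# Capacities and input power are hypothetical round numbers, not manufacturer data.

cooling_kw = 100.0   # chilled water delivered
heating_kw = 120.0   # hot water recovered at the same time
input_kw = 32.0      # electrical input while doing both

cooling_only_ratio = cooling_kw / input_kw              # credit in cooling-only terms
combined_ratio = (cooling_kw + heating_kw) / input_kw   # total efficiency ratio in recovery mode

print(f"Cooling-only efficiency ratio: {cooling_only_ratio:.1f}")
print(f"Combined (cooling + heating) ratio: {combined_ratio:.1f}")   # roughly double
```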
They can produce hot water up to 55°C, which can be boosted locally if required with high-temperature heat pumps, immersion systems or point-of-use heaters. The carbon footprint of the electricity grid has reduced dramatically in recent years, so this is now a better solution than burning fossil fuels on site. In winter, air-cooled units can operate at outdoor air temperatures as low as -10°C with hot water production up to 50°C.
Air-cooled four-pipe EXP polyvalent units have three heat exchangers: one which is always used for the chilled water, one which is always used for the low temperature hot water [LTHW], and the airside coils, which provide the atmospheric heat sink in case the demands for cooling and heating are not in balance.
The six-pipe system adds a third water heat exchanger to capture the super-heat from the compressed gas and create water up to 70°C.
The biggest constraint on the growth of computing power in business is not the technology itself; often it is the cooling for cutting edge IT developments such as blade servers.
IT managers are faced with a real problem – how to ensure the performance and reliability of computers is maintained, while effectively managing the dramatic increase in the power required for cooling.
Data centres typically operate on a two-year upgrade cycle and each time the equipment changes so do its cooling demands. Data centre cooling solutions must therefore be flexible and adaptable on a scale not faced by any other air conditioning application.
On top of this, more and more computing power is being compressed into ever smaller spaces and IT user demands are still rising. If an existing facility needs more computing power, it might be possible to make the facility work harder by packing more servers into a smaller footprint thus extending the life of the building.
This – and the fact that these mission critical systems tend to operate 24 hours a day, seven days a week – places enormous pressure on data centre cooling equipment suppliers to offer ever higher cooling capacities and densities at ever greater efficiencies.
And that drives the demand for more compact cooling units. Klima-Therm’s Althermo DMR-H, for example, is a new hybrid adiabatic dry cooler, designed in the form of a compact cylindrical module. It delivers exceptional energy efficiency and application flexibility thanks to several important innovations in components, design and control.
Thanks to its distinctive shape, it can draw air in from all around and can be installed very close to walls. The combination of these elements allows the installation area of the dry cooler to be reduced by 75%.
The Althermo DMR-H dry cooler can be used as a single module or multiplexed to create a compact array, providing high performance, high efficiency cooling up to 2.3MW in a single assembly. The same innovative design features are used for a refrigerant-based remote condenser version of the product, designated CMR, making a complete range for all typical heat rejection requirements.
The standard version is provided with electrically commutated (EC) fans, but AC fans are also an option.
Other options include acoustic treatment for whisper-quiet operation, a protective coating for aggressive atmospheres and closed-circuit operation through the installation of another plate heat exchanger with recirculating pump for glycol-free free-cooling in existing systems.
But be warned – you can’t simply install traditional air conditioning to cool computers. Close control air conditioning is really the only solution for environments with a heavy IT load because they balance relative humidity and temperature as well as ensuring there is adequate, well controlled airflow to the occupied zones.
One of the main factors that marks out data centre cooling from comfort air conditioning is the need for complete reliability. You do not want to be the person in charge of servers at a large banking group if they go down.
Back-up and alarm systems must, of course, be integrated so the end user and maintenance contractor are aware of a problem as soon as it happens. Indeed, redundancy requirements might mean you need a system with no ‘single points of failure’ for the most critical of sites, typically designated ‘Tier 4’.
Four steps that boost energy efficiency
As part of an overall data centre efficiency strategy, there are immediate steps that can be taken to improve the efficiency in this environment:
Power usage in data centers represents a steadily larger share of global electricity consumption. A recent figure for the US puts data center electricity use at 1.8% of the national total. A large fraction of the energy used above what the actual computer equipment consumes comes from cooling. Another environmental consideration is the clean water used for evaporative cooling. Many schemes aim to reduce the data center power usage effectiveness (PUE) – the ratio of total facility energy to IT equipment energy – towards one, including the use of artificial intelligence.
One of the most important requirements in order to reduce your cooling costs is to measure the conditions properly in the first place. The first things to consider are:
There are a few types of humidity and temperature transmitters that are typically used in data centers.
Outdoor Humidity Sensors
The outdoor humidity and temperature sensors are used with airside economizers and with cooling towers. The most advanced economizer control paradigm is to use the differential enthalpy (heat content). You measure the enthalpy of the outdoor air and the return air to control when to recondition hot return air and when to use outdoor air.
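As a rough illustration of that differential-enthalpy comparison, the sketch below estimates moist-air enthalpy from temperature and relative humidity using standard psychrometric approximations (Magnus saturation vapour pressure, sea-level atmospheric pressure) and compares outdoor air against return air. The example readings are hypothetical.

```python
# Illustrative differential-enthalpy check for an airside economizer.
# Uses standard psychrometric approximations; the example readings are hypothetical.
import math

P_ATM = 101325.0  # Pa, sea-level pressure assumed

def saturation_vapour_pressure(t_c: float) -> float:
    """Magnus approximation, Pa."""
    return 611.2 * math.exp(17.62 * t_c / (243.12 + t_c))

def enthalpy_kj_per_kg(t_c: float, rh_percent: float) -> float:
    """Specific enthalpy of moist air per kg of dry air."""
    p_v = rh_percent / 100.0 * saturation_vapour_pressure(t_c)
    w = 0.622 * p_v / (P_ATM - p_v)                  # humidity ratio, kg water / kg dry air
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)   # kJ/kg dry air

outdoor = enthalpy_kj_per_kg(t_c=18.0, rh_percent=60.0)     # hypothetical readings
return_air = enthalpy_kj_per_kg(t_c=32.0, rh_percent=30.0)

if outdoor < return_air:
    print(f"Use outdoor air: {outdoor:.1f} kJ/kg vs return {return_air:.1f} kJ/kg")
else:
    print("Recondition return air instead")
```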
Outdoor humidity sensors with wet-bulb temperature output indicate directly when evaporative coolers can be used. The wet-bulb temperature indicates the temperature that can be reached with evaporative cooling. If the outdoor humidity is too high the rate of evaporation is low and the cooling effect too low.
One of the most important parts of an outdoor humidity and temperature sensor is the solar radiation shield. The purpose of the solar radiation shield is to reduce the influence of heat from the sun disturbing the measurement. Seemingly small design changes can easily cause 1-2°C extra heating in unfavorable conditions.
Outdoor sensors are also subjected to everything Mother Nature might throw at them, including icing rain and heavy winds. A data center runs 24/7, all year round; you do not want to see failures!
A proper outdoor humidity sensor needs to have a good solar radiation shield. In a well-designed shield the lower surfaces of the plates are black, which is essential for good performance.
Duct Humidity Sensors
The duct humidity and temperature sensors are used in ducts and air-handling units to measure and control the condition of incoming air and to measure the return air from the data center. They are used as a complement to the outdoor humidity sensor so that the enthalpy difference between the return air and the outdoor air can be calculated. Some of the duct sensors may be subjected to harsh conditions in humidifiers or in inlet air ducts.
Consider also how you will make periodic checks when you install the devices. It is often easy to add a port for a reference probe during installation; this lets you introduce a reference probe into the duct later and compare its reading to the duct sensor.
Wall or Space Humidity Sensors
Wall or space sensors measure the actual conditions inside the data center. Humidity conditions are usually benign. However, the rate of change can be fast in response to load-level fluctuations and when switching between reconditioned air and free cooling. As the airflow rate around these sensors is typically lower than for duct sensors, the response time to temperature changes is slower. There might also be outgassing from cables and other equipment running at ever-higher design temperatures, which may cause drift in some humidity sensors. With fast temperature fluctuations it might be a better choice to use dew point temperature as the humidity control parameter, as it doesn't depend on the temperature at the sensor.
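If dew point is chosen as the control parameter, it can be derived from any temperature and relative humidity pair. The sketch below uses the common Magnus approximation; the cold-aisle and hot-aisle readings are purely illustrative.

```python
# Illustrative dew-point calculation from temperature and relative humidity
# using the Magnus approximation; the example readings are hypothetical.
import math

def dew_point_c(t_c: float, rh_percent: float) -> float:
    a, b = 17.62, 243.12  # Magnus coefficients over water
    gamma = math.log(rh_percent / 100.0) + a * t_c / (b + t_c)
    return b * gamma / (a - gamma)

# Cold aisle vs hot aisle: the temperature differs sharply but the dew point barely
# moves, which is why it is a more stable control parameter than relative humidity.
print(f"{dew_point_c(24.0, 45.0):.1f} C")   # cold-aisle reading (hypothetical)
print(f"{dew_point_c(38.0, 20.0):.1f} C")   # hot-aisle reading (hypothetical)
```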
You also need to consider what conditions you are measuring and using for control purposes, as the temperature and humidity will be dramatically different before and after the heat load (cold or warm aisles). You can get high-quality instruments that measure conditions with high accuracy – devices with 0.1°C and 1%RH accuracy are readily available – but moving the sensor slightly can cause much larger changes.
Even small measurement errors can cause significant increases in your energy bill. It pays to get quality instruments and maintain the measurements in good condition. Careful consideration of the installation location also pays off.